
Repurposing the Word Processor for Reprocessing Behaviors:

Prompting Questions Prompting Questions

Curtis Jay Bonk


West Virginia University
Department of Educational Psychology
609 Allen Hall
Morgantown, WV 26506-6122
(304) 293-2515

and

Thomas H. Reynolds
University of Wisconsin-Madison
Educational Sciences Building
1025 W. Johnson Avenue
Madison, WI 53719
(608) 263-4221

See also:
Reynolds, T. H., & Bonk, C. J. (1992). Repurposing the word processor. In N. Estes & M. Thomas
(Eds.), Proceedings of the Ninth International Conference on Technology and Education "Sans
Frontieres" (Vol. 1, pp. 535-537). Austin, TX: The University of Texas.

The research reported here was supported in part by dissertation


research grants obtained by the first author from the American
Psychological Association (APA) and the Wisconsin Alumni Research
Foundation (WARF). The authors wish to thank Gary Davis, Steve
Yussen, Richard Lehrer, Jack Kean, Joan Littlefield, Mary Gomez,
Marty Nystrand, Jim Middleton, and Todd Franke for support and
guidance.  The authors of the two projects described here also are
indebted to Robert Gilpatrick, Nancy Holty, Cheryl Hofstad, Cathy
Salapa, Barb Seeboth, Brad Hughes, Linda Tate, Tom Curtis, and the
many other students and teachers who made this research possible.
Portions of this paper were presented at the Annual Meeting of the
American Educational Research Association, April 1990, Boston, MA,
and at the Spring 1991 conference of the National Council of
Teachers of English (NCTE), Indianapolis, IN.
Repurposing the Word Processor for Reprocessing Behaviors:
Prompting Questions Prompting Questions

Abstract--During the past decade, a number of researchers have


used writing prompts to investigate expert-novice differences in
writing. Research on the usefulness of computerized procedural
facilitation has indicated that middle school students may benefit
from such temporary supports because of their need for executive
control of writing processes. Both computer and noncomputer-based
research on procedural facilitation support is grounded in
Vygotskian notions of mediated instruction within one's zones of
proximal development to promote strategy internalization. In
effect, computerized prompts and cues are intended to facilitate
novice writer development from a "knowledge telling" mode to a
more advanced "knowledge transforming" stage. Still, the results
of previous research provide mixed support for this framework.
The purpose of this research was to examine the usefulness of a
generative-evaluative model of computerized procedural supports,
comparing the writing of children in the sixth, seventh, and
eighth grades to that of college students.  The results from procedural
facilitation and keystroke mapping/replay tools indicated that the
college student population benefited more from prompt cues than
younger students who were exposed to the prompts over a longer
period of time. Evaluative types of prompts were used more often
and more productively by college students, resulting in increased
writing quality. In contrast, middle school students favored
generative heuristics, but with less effect on product quality.
As predicted by the generative-evaluative model, the younger
knowledge telling students used the prompts to help them unlock
more content, not as a strategic tool for enhancing the cohesion
of their texts. A comparison between middle school and college
students of prompt timing, usage rate, and productivity was a
major focus of this investigation.

Introduction

This paper introduces a model that details how procedural

facilitation (Bereiter & Scardamalia, 1987), or assisted prompting,

might refocus the less experienced writer on important generative

and evaluative processes within a reprocessing environment.  This

model is a preliminary attempt to account for and define cognitive

processes impacted by slight alterations in the writing

environment.

Microgenetic Analyses

Writing behaviors can be tracked and analyzed according to

temporal sequence or form of writing transaction: pretext,

typically without written notation or formal transcription (see

Witte, 1985); shaping or transcription of one's current thoughts

in working memory at the point of inscription (Matsuhashi, 1987);

and altering thoughts or previously transcribed plans through

forays back into text (Matsuhashi, 1987).  Writing software can be

repurposed to uncover writing processes often hidden in pen and

paper activities and to verify the latter two behaviors.  For

instance, computer-based writing tools like keystroke mapping may

be instrumental in close analyses of revisionary tactics. In

addition, one might use keystroke replay to detail how writers

might utilize computer prompt suggestions, differentiating between

strategies used at the point of inscription and those pertaining

to previous segments of text (Matsuhashi, 1987). Nevertheless,

this minute analysis also forces difficult decisions in

documenting global and local writing strategies (Matsuhashi, 1987;

Smith & Lansman, 1989).

The best explanation for the success of new computer-based

writing tools derives from the cognitive process model of Flower

and Hayes (1981), which is well grounded and widely

endorsed. During the past decade, researchers have investigated

this model through the creation and testing of the effectiveness

of computerized prompts on children's writing (Bereiter &

Scardamalia, 1982, 1987; Daiute, 1985; Montague, 1990). In

theory, prompting strategies, or "procedural facilitators," allow

learners to approach the writing task by using strategies that

involve higher-level aspects of their problem solving processes

such as planning, checking, evaluating, and goal-setting. But,

until recently, a comprehensive model of computer-based prompted

writing did not exist. The goal of the two research projects

reported here was to build on prior procedural facilitation work

while attempting to develop and test a model of prompted writing

(though the model is presented briefly later in this paper, see

Bonk, 1989, or Bonk & Reynolds, in press, for a more detailed

description of our generative-evaluative model of writing).

In parallel to the generative-evaluative model driving the

research reported here, Gavriel Salomon (1988) described how

computerized prompts invoked during the writing process could

facilitate metacognitive awareness of diverse writing strategies.

Salomon claimed that computer prompting programs act as

intellectual partners that can pose questions within writers'

zones of proximal development (defined later). In Salomon's

studies, prompts, or procedural facilitators, act as temporary

supports for student thoughts, thereby encouraging internalization

of these strategies, a rationale essentially drawn from Vygotsky's

(1978; 1986) theories of learning and development.

Vygotskian Views Applicable to Writing

Vygotsky's (1978; 1986) speculations about learning and

development provided fruitful bases for many contemporary studies

pertaining to writing, computers, and cognition. Vygotsky argued

that social interaction with adults and more experienced peers is

a mechanism to guide learners in reaching a developmental

potential that they might not ordinarily attain; he later labeled

this "the zone of proximal development" (Wertsch, 1985b; Vygotsky

1978; 1986). He used this term to denote "the distance between a

child's `actual developmental level as determined by independent

problem solving' and the higher level of `potential development as

determined through adult guidance or in collaboration with more

experienced peers'" (Wertsch, 1985b, p. 67). Social interactions

and instruction are targeted to occur ahead of the child's

actually completed developmental level.

Vygotsky viewed activities children could perform with the

assistance of others as more indicative of their mental

development than items they could solve on their own. For

instance, two children may write at the same independent or

intrapsychological level but vary in the degree to which

they can take advantage of adult, peer, or computer forms of

assistance (Wertsch, 1985a).  Important for the neo-Vygotskian,

therefore, was how graduated aids and props like computer software

could uncover the child's potential or readiness to perform at a

higher level of functioning (Brown & Ferrara, 1985).

For Vygotsky, a great deal of development was due to expert

mediation (Brown & Palincsar, 1989). For example, recent

propagation of the terms cognitive apprenticeship and scaffolding

is based on expert modeling of an activity and the gradual ceding

of task control to a novice (Brown et al., 1988, 1989;

Schoenfeld, 1988). Restated, scaffolding refers to the support

provided by a teacher, expert, more capable peer, or computer tool

when intentionally interacting within the learner's zone of

proximal development, thereby extending skills to higher levels of

functioning (DiPardo & Freedman, 1988; Palincsar, 1985; 1986).

Confrontation and conflict generated by peer (or computer software

package) can trigger cognitive dissonance or disequilibrium

(Inhelder & Piaget, 1958) causing individuals to seek additional

information to resolve the conflict (Palincsar & Brown, 1987).

For instance, the goal of teacher conferences and peer response

groups in writing is for less experienced writers to witness

expert questioning and then pose similar questions to themselves

(Daiute, 1986a; Freedman, 1987).

Salomon (1988) uses Vygotsky's ideas of zones of proximal

development, mediated instruction, and internalization of tool-

based instruction to claim that interactions with a supportive

partner could include a computer because this tool is vital to

interpersonal processing--computers change one's relationship to

the task and to the world (see, also, Higgins et al., 1990).

Consequently, the user is posited to internalize the intelligence

displayed (or seemingly displayed) by computer tools (see Bonk,

1989, for additional Vygotskian linkages concerning procedural

facilitation in writing).

Related Writing Research

In order to design tools and strategies appropriate for

internalization, writing development must be analyzed. For

instance, many difficulties of young children in writing emerge

from their lack of an executive routine to perceive the dissonance

between what they have written and what they intend to write. In

addition, less successful writers fail to efficiently handle the

switching between generating and evaluating ideas (Flower & Hayes,

1981). A high level executive routine is the most valuable

component of text production, since it directs and regulates the

whole writing process (Bereiter, 1980). Consequently, interactive

computer assistance may be useful both for the encouragement of

new ideas and also for the evaluation of them.

Children often do not have the benefit of feedback from a

conversational partner when they enter the domain of writing

(Bereiter, 1980).  Without support, text produced by young

writers appears egocentric and immature because they are operating

in what Bereiter and Scardamalia (1985, 1987) label the "knowledge

telling" problem space. Here, children concentrate mainly on the

words they want to say without framing them according to discourse

conventions or audience expectations (Bonk, 1990; Scardamalia,

Bereiter, & Steinbach, 1984), composing text in the order

generated.  Not surprisingly, since their major goal is to get

content on paper or externalize their knowledge, the slightest

prompting persuades knowledge tellers to lengthen their texts

(Bereiter & Scardamalia, 1985). As alluded to earlier in Figure 1

and reemphasized later in Figure 2, an intermediary stage of

writing development between knowledge telling and knowledge

transformation is exemplified by the movement between rhetorical

and content problems and subgoals (i.e., converting random

thoughts into cohesive and interpretable text in accordance with

the intended reader perspective). This developmental gap is

apparent when the young or inexperienced writer is dependent upon

support strategies (e.g., cue cards or think sheets), concentrated

self-cuing, or more direct outside intervention (e.g., teacher or

computer suggestions) in order to generate extended written

discourse.

Procedural Facilitation

External influences like computer software can encourage the

writer to move from knowledge telling toward incorporating more

mature monitoring and switching mechanisms. For instance,

Bereiter and Scardamalia (1987) suggested an intervention

technique, procedural facilitation, to overcome knowledge telling

approaches; this intervention oversees the overall executive

procedure while providing cues or routines for switching between

text generation and revision. Scardamalia and Bereiter (1985, p.

566) defined procedural facilitation as "routines and external

aids designed to reduce the processing burden involved in bringing

additional regulatory mechanisms into use;" in effect, decreasing

task demands in order to refocus the writer on higher-level

composing concerns. In procedural facilitation, the intelligence

remains in the writer; the prompts or supports only present

information, notes, comments, or questions to be responded to by

the user/writer. Basically, procedural facilitation augments

individual cognitive processes in writing even though it is

nonspecific and does not address the actual substance of what the

student is composing.

One of the key goals of the prompts is to help inexperienced

writers become more reflective when approaching a writing task.

Procedural facilitation can offer assistance in all the following

ways: designing appropriate plans and goals, content generation,

attending to structural elements of text, evaluating text produced

against plans and goals, diagnosing and operating on textual

inconsistencies and problems, producing a coherent whole, and,

more generally, increasing writing sophistication (Scardamalia &

Bereiter, 1985). In effect, procedural facilitation is designed

to take the learner's attention away from distracting mechanical

demands of writing and center it on cognitive and metacognitive

aspects of writing (Bonk, 1989; Bonk & Reynolds, in press;

Scardamalia & Bereiter, 1985), thereby making the composing

process more dialectic.

To initiate evaluation of interactive models of the writing

process, some researchers have investigated the effectiveness of

prompts on young and/or immature writing (Bereiter & Scardamalia,

1982; Bonk & Reynolds, in press; Daiute, 1985). Though a variety

of prompting approaches are advocated to overcome the lack of a

conversational partner in writing, thereby compensating for

underdeveloped writing strategies, the production of significant

amounts of additional text when prompts are available is not

equated with higher quality writing performance; "more is not

necessarily better" (Scardamalia, Bereiter, & Goelman, 1982).

Much of the research on procedural facilitation in writing has

addressed middle school students (Bereiter & Scardamalia, 1981;

Daiute & Kruidenier, 1985; Graves, Montague, & Wong, 1990;

Isaacson & Mattoon (1990); Salomon, 1988). As alluded to earlier,

however, often subjects' enthusiasm for the method is not

paralleled by the production of higher quality texts or the

selection of more sophisticated revisionary tactics. The response

of researchers is that they are concerned not with immature

writers improving upon their existing writing strategies and

procedures, but rather with their significantly altering their

writing processes (Bereiter & Scardamalia, 1981).  In any event, a number

of these studies lack experimental rigor (e.g., the treatment

length is limited), or the processes and products often proclaimed

as significantly different fail to fit models of writing presented

or are conjured up as secondary considerations or "exploratory

data analyses" after the initial research hypotheses prove

insignificant (Bereiter & Scardamalia, 1987; Carey, 1988; Daiute,

1985, 1986b; Scardamalia et al., 1984; Woodruff et al., 1981).

Finally, though researchers allude to the importance of switching

between generative and evaluative aspects of text, procedural

facilitation research has yet to incorporate both generative and

evaluative prompting support (see Bonk, 1989, for a synthesis of

findings in procedural facilitation in writing).

The use of the computer may help students lacking executive

support in their writing, or assist the typical young reader or

writer who takes a surface or regurgitation approach to learning

(some would argue that over two-thirds of the student body fits

the latter description; see Iran-Nejad, 1990).  In effect, the

right form of assistance from the computer might stimulate

increased student planning and revising when writing.

A number of researchers have designed computer tools not meant

to be intelligent, but, instead, built to amplify the intelligence

of the user through guidance or procedural advice (Adams et al.,

1990). For instance, Scardamalia et al. (1989) designed a tool

referred to as CSILE (Computer Supported Intentional Learning

Environments). CSILE supports student intentional learning and

goal setting (e.g., students construct, evaluate, and interrelate

knowledge, ideas, and comments on personal or peer knowledge bases

through notes, pictures, graphs, time-lines, and maps). There are

many other examples of software tools and composing supports that

facilitate skills in writing and other domains, including tools

designed to guide students through phases or aspects of the

composing process (Montague, 1990).  Overall, the purpose

of these systems is to make the user as intelligent and confident

as possible.

But, does it make sense for a computer to guide students in

monitoring their thoughts as they plan, generate, and revise text?

There are no clear answers here. Studies in procedural

facilitation have employed computer prompting programs that

offered: (1) contentless prompts or sentence openers (Woodruff,

Bereiter, & Scardamalia, 1981), (2) advisory heuristics on

revisionary processes while promoting internal dialogue (Daiute,

1985; 1986b; Daiute and Kruidenier, 1985), (3) prompts embedded

within different computer environments (e.g., outliners, word

processors, and knowledge building tools) (Kozma, in press), and

(4) prompts that were unsolicited and random (Salomon, 1988).

Naturally, there are a number of guidelines for using

procedural facilitation recommended by the above researchers.

Basically, it is important that students view procedural

facilitation on computers as a useful intellectual skill rather

than as a temporary trick (Paris, 1988; Scardamalia & Bereiter,

1986), feel ownership in using it (Scardamalia, Bereiter, &

Steinbach, 1984), and are motivated to use it through reputable

expert modeling (Paris, 1988).  Still, for procedural facilitation

to be maximally effective, it must be used where there is an

excessive burden on

executive processes in composing and where students know more

about quality products than their limited procedures allow them to

produce (Bereiter & Scardamalia, 1982).  Finally, assessment of

the effectiveness of the prompts requires that they be tracked so

that appropriate conditions for cuing can be derived (Swallow et

al., 1988). Because prompt effectiveness in previous studies

remains unclear, determining conditions for cuing, specific prompt

usage patterns, and average prompt productivity rates are key

purposes of this study.

A Model of Generating and Evaluating Text

Procedural prompts perform a critical function in refocusing

students' attention to the interaction between generative and

evaluative processes. In a reprocessing environment, wherein

writing is recursive, not linear, students should be most

concerned with aspects like ideational development, organization,

coherence, relevancy, elaborateness, and overall flow of their

ideas, not mechanics (e.g., spelling or grammar) (Bereiter &

Scardamalia, 1987). When inexperienced writers are permitted to

operate in the insufficiently controlled or spontaneous level of

writing, they tend to "downslide" toward the production of random

text--the knowledge telling level. In effect, they are pulled

into the local demands of topic directed content generation and

typically edit for mechanical mistakes (Bruce, Collins, Rubin, &

Gentner, 1982). In our model, generative and evaluative prompts

are noted as one way to shift the inexperienced writer's attention

to higher levels of processing and control in the writing

hierarchy, promoting reflection and knowledge transformation (see

Figure 2; see also Bonk (1989) or Bonk & Reynolds (in press) for

more details on the generative-evaluative model of prompting).

Thus, the knowledge teller is encouraged to incorporate more

mature stages within her writing through computer interactions and

question modeling.

Both generative and evaluative processes are critical to every

aspect of the writing process. Nevertheless, generative processes

are emphasized in planning and initial text generation (Caccamise,

1987), while evaluative processes are used at less frequently

during text generation--though during critical moments--to accept,

alter, or reject one's initial plans and ideas. Similarly,

evaluative processes are emphasized during later drafts, even

though new textual ideas and sequences and plan reorganizations

may evolve from evaluative operations. Within this model,

therefore, effective composing (and thinking) is viewed as both a

creative and critical thinking process (Bonk, 1988; Flower &

Hayes, 1981; Isaksen & Parnes, 1985).

-----------------------------------------------------------------

Insert Figure 1 about here

-----------------------------------------------------------------

The generative-evaluative model also indicates that there are

hierarchical levels within writing that function together to

enhance writing quality. The critical features operating across

these levels of composing are the reprocessing framework and the

combined generative and evaluative organizing and reorganizing of

one's plans, goals, and text produced. Collectively, the prompts

were designed to bypass inclinations to address mechanics or

cosmetic revisionary tendencies while supporting higher-order

concerns of knowledge transformation.

Rationale and Focus

As indicated earlier, there were a number of stringent controls

placed within these studies to accommodate criticisms of prior

procedural facilitation in writing research. First, a model for

procedural prompts was developed and tested. Next, the prompting

program was piloted with a small pool of subjects who rated the

usefulness of each proposed prompt. Unlike prior studies that

focused on a single stage or category of writing, prompts in these

two studies addressed both the planning (generative) and revising

(evaluative)

aspects of composing. Prompt modeling and think sheets were

incorporated into the intervention to enhance the instructional

effectiveness. The impact of procedural facilitators was analyzed

across a number of grade levels. Two types of explorations of the

impact of prompts were used and compared: a thirteen-week/five-

paper intervention with the middle school students (though the

prompts were only available for six weeks or three papers) and a

ten-week/single-paper intervention with college students, in which

the prompts were only available during the two composing sessions.

Students were not forced to use the prompts at random intervals,

but were asked to use them during "naturally occurring reflective

moments of the composing process." The precise calculation of

prompt effectiveness was determined and compared through a later

replay of composing session keystrokes (Bridwell, Nancarrow, &

Ross, 1984; Reynolds & Bonk, 1990b).

Design of the Study

Middle School Sample

A total of 164 middle school students in a rural Midwestern

school (53 sixth-, 52 seventh-, and 59 eighth-graders) took part

in a 13-week course on electronic writing.  After three weeks

of training on the word processing program, students were

blocked according to low and high writing ability using Metropolitan

Achievement Test scores and were then randomly assigned within

each grade to treatment or control groups. During the remaining

ten weeks, students wrote five expository essays (two sessions per

paper); the prompts were available to the treatment group during

papers 2, 3, and 4 (i.e., a total of six sessions). The

assignment reviewed in this paper is the midtreatment assignment

concerning recommendations and suggestions to President Bush

regarding items to place inside a time capsule honoring former

President Reagan near their school (for more task and design-

related information, see Bonk, 1989). As indicated below,

keystroke data was the primary means for determining prompt usage

and effectiveness rates. Because of the time intensity of

keystroke evaluations, a randomly selected subgroup of 10 sixth-,

10 seventh-, and 10 eighth-graders in the treatment group was analyzed

(30 middle school students split by ability). Though available,

keystroke analyses on another 30 control subjects are not reported

here since these students did not have access to the computer

prompts (the focus of this paper), nor is keystroke data divided

between low- and high-ability students due to limited sample size.

College Sample

Twenty-four junior and senior college students enrolled in an

intermediate composition course at UW-Madison were randomly

divided between treatment and control conditions. The twelve

subjects in the treatment condition were allowed to use the

generative-evaluative prompts during the two sessions allocated to

complete the in-class writing assignment. Only 12 college

students in the experimental group had access to the computer

prompts, thereby limiting the sample of computer prompted writing

behavior investigated through keystroke mapping. The writing task

was an expository essay regarding the pros and cons of the

workshopping method (for more information on this writing task,

see Reynolds and Bonk, 1990a).

Procedural Facilitators

The procedural facilitators used in both studies were grouped

by generative and evaluative thinking skill (see Table 1), and

based on cognitive process models of writing and analyses of prior

prompted writing interventions. Generative prompts, created to

expand ideas and suggest new ideas and perspectives, included

categories for fluency or number of ideas, flexibility or

different approaches to a problem, originality or innovativeness,

and elaboration of an idea. Younger students knew the generative

prompts as "more ideas," "types of ideas," "new ideas," and

"extenders." In contrast, evaluative prompts (relevancy


relevancy of

information, logic or clear flow, assumptions or recognizing bias,

and conclusions)
conclusions were devised to help the writer focus on sentence
clarity and overall organization of written text. Prompts were

drawn from psychological and educational assessment measures on

critical and creative thinking (Talbot, 1986; Torrance, 1974) and

previous research on computer-assisted self-monitoring (Daiute and

Kruidenier, 1985; Woodruff et al., 1981).

----------------------------------------------------------------

Insert Table 1 about here

-----------------------------------------------------------------

A total of 24 prompts were organized from left (generative) to

right (evaluative) in a matrix on the computer keyboard.

Templates placed at the top of the keyboard (and paper think

sheets to the right of the computer for the middle school

students) indicated the keyboard location of generative and

evaluative prompt categories used to encourage switching between

text generation and evaluation. To nurture self-regulated

writing, prompts were available at any point in the composing

process.  A two- or three-keystroke control code sequence invoked

a prompt, which appeared in the bottom two lines of the screen in

the form of questions or statements for the writer to consider in

regard to the paper (e.g., "Reread your last paragraph.  Would expanding or

adding a sentence help your reader understand?").
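
To make the mechanism concrete, the sketch below (in Python, purely
for illustration; the actual system was implemented as WordPerfect
macros) shows the kind of lookup such a prompt matrix performs.  The
letter-to-prompt bindings and category labels are reconstructed
loosely from the prompt descriptions given later in this paper and in
Table 1, and should not be read as the exact production set.

    # Illustrative sketch only; the real system was built from
    # WordPerfect macros, not Python.  Prompt texts are paraphrased
    # from this paper; category labels are approximate.
    PROMPTS = {
        "A": ("generative", "fluency", "Exaggerate and contrast your ideas."),
        "Z": ("generative", "fluency", "What else might the audience want to know?"),
        "R": ("generative", "elaboration", "Play with and expand your last idea."),
        "T": ("evaluative", "relevancy", "Relate your ideas to the main topic."),
        "H": ("evaluative", "logic", "Reread the first and last sentences of "
                                     "every paragraph and provide transitions."),
    }

    def invoke_prompt(key):
        """Return the message displayed on the bottom two screen lines."""
        kind, category, text = PROMPTS[key.upper()]
        return "[%s/%s] %s" % (kind, category, text)

    print(invoke_prompt("H"))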

Measures

Expository tasks were used since they required both the

generative/inventive and evaluative/revisionary processes

encouraged by the prompts (Ruth & Murphy, 1988; Bereiter &

Scardamalia, 1987). Three primary evaluation instruments were

used to examine writing products and processes. First of all,

revisionary tactics were analyzed through the replay of keystrokes

using a modification of Bridwell's Revisionary Classification

Scheme (Bridwell, Sirc,& Brooke, 1985; Reynolds & Bonk, 1990b).

The primary revision factors analyzed were surface revisions

(format, spelling, grammar changes; and meaning-preserving word

changes), meaning making revisions (text-based changes at the

word, phrase, sentence, and multisentence level),[1] and textual

repositioning or scrolling.

[Footnote 1: We do not assert here that phrase, sentence, and
multisentence changes coded as "meaning level revisions" will
always impact favorably on text; some changes may, in fact,
detract from the overall meaning generated.  Thus, we are
currently reanalyzing our data to begin to understand whether
"meaningful" revisions are really meaningful.]
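
As a rough illustration of this three-way coding, the sketch below
(Python; hypothetical event records, not the study's data) maps a
single replayed event to one of the three factors.  In the studies
themselves the coding was performed by raters over keystroke
replays, not computed automatically.

    # A minimal sketch of the adapted Bridwell coding, assuming each
    # replayed event has already been judged for scope and meaning
    # effect.  These records are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class RevisionEvent:
        kind: str              # "edit" or "scroll"
        scope: str             # "format", "spelling", "grammar", "word",
                               # "phrase", "sentence", or "multisentence"
        preserves_meaning: bool = True

    def code_revision(ev):
        """Map one event to surface, meaning, or repositioning."""
        if ev.kind == "scroll":
            return "repositioning"   # textual repositioning/scrolling
        if ev.scope in ("format", "spelling", "grammar") or ev.preserves_meaning:
            return "surface"         # includes meaning-preserving word changes
        return "meaning"             # text-based, meaning-making change

    print(code_revision(RevisionEvent("edit", "sentence", preserves_meaning=False)))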

While recording revisionary behaviors of students using the

prompts, data on prompt frequency, order, and effectiveness was

gathered. For instance, prompting behaviors were cataloged based

on whether they occurred in the first or second composing session.

In addition, each prompt record was further coded if it was

followed by a textual sequence that paralleled the intent of the

prompt. For each composing session, accounts were maintained of

the number of times individual prompts were invoked as well as a

tally of total prompts used. Analyses were recorded by session

and grade level (though the temporal sequence within each session

was not separately noted). Finally, the amount of time each

writer spent pausing, composing, revising, and scrolling after

each prompt was not available from this particular keystroke

mapping procedure (see Reynolds and Bonk, 1990b for further

details).

Both the prompting and keystroke mapping systems were developed

through the use of WordPerfect macro function capabilities. As

indicated, prompt usage, timing, and productiveness (prompts

causing meaningful changes) were recorded and analyzed.
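
The tallies themselves are straightforward once each invocation has
been coded.  A sketch follows, assuming a hypothetical record format
of (session, prompt key, followed-by-meaningful-change); the records
shown are invented for illustration.

    # Sketch of the usage/timing/productivity tallies, assuming each
    # prompt invocation was hand-coded from the keystroke replay.
    from collections import Counter

    records = [
        (1, "A", False), (1, "Z", True), (1, "T", False),
        (2, "H", True),  (2, "F", False), (2, "T", True),
    ]

    usage_by_session = Counter(session for session, _, _ in records)
    usage_by_prompt = Counter(key for _, key, _ in records)
    productivity = sum(1 for _, _, ok in records if ok) / len(records)

    print(usage_by_session)    # invocations per session
    print(usage_by_prompt)     # invocations per prompt
    print("%.0f%% of prompts were productive" % (100 * productivity))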

A number of other measures were also incorporated in both the

middle school and college studies. For analytic rating of papers,

a six-point holistic scoring scale from the Educational Testing

Service was used as well as a modified version of Purves' (1985)

Analytic Scheme for Critical Thinking (an eight category, five-

point scale paralleling the eight prompt categories used in these

studies).  In addition, students' attitudes regarding the computer

as a partner or more expert peer in their writing were evaluated

in both experiments.  Middle school students also completed three
other instruments: (1) open-ended questions concerning the giving

of advice in writing and reading (Englert, Raphael, Fear, &

Anderson, 1988; Salomon, 1988); (2) the Index of Writing Awareness

(a twenty-item multiple choice instrument; see Bonk, Middleton,

Reynolds, & Stead, 1991); and (3) a thirteen-item prompt sort

task. These instruments were incorporated to assess the degree of

prompt internalization.

Results

As noted earlier, more detailed instrument descriptions and

results of these studies (including interrater reliability checks

on the dimensional and holistic scoring) are reported in Bonk and

Reynolds (in press) and Reynolds and Bonk (1990a). Briefly

stated, however, in the study of middle school students, the

treatment group failed to improve their writing performance on the

posttest as a result of exposure to the prompts over a six-week

period or even display signs of improvement when the prompts were

available. In addition, the treatment group did not perform

significantly better than the control group on any of the prompt

internalization measures, including the critical prompt sort task.

However, middle school students did reduce the amount of surface

revisions they made when the prompts were available, and,

correspondingly, they also increased the frequency of their

textual scanning/scrolling (note also that surface revisions were

negatively correlated with holistic scores while repositioning was

positively correlated with holistic scores). However, because no

differences were found in meaningful revisions, it appeared that

they read the prompts and scanned the text to diagnose suggested

problems, but did not know how to carry forward and operate on

potential problems. The computer attitudes questionnaire,

nonetheless, revealed that the treatment group considered the

computer more helpful in creating a sense of audience and in

providing opportunities to evaluate their compositions.

Grade level analyses indicated a shift in writing awareness and

performance between grade six and seven (see Bonk et al., 1991).

Coincidentally, the sixth-graders used the prompts most

productively and had the most positive attitudes regarding the

computer program. In fact, only the sixth-grade high-ability

treatment group displayed significant differences on holistic and

dimensional scoring as compared to its respective control group

and pre- to post-test difference scores. Thus, procedural

assistance may have been most appropriate at the sixth-grade

level.

In contrast, the college writers in the treatment group used the

prompts for one paper--two sessions--and carried out more

meaningful operations, but did not change their

surface level or repositioning behaviors. In effect, the

meaningful changes favorably influenced the text produced, as the

treatment group produced higher quality texts than the control

group. Additional analyses indicated that the prompts impacted

most on the evaluative dimensions of the Purves dimensional

scoring instrument. Later replay of composing sessions clearly

showed that the evaluative prompts were used more frequently and

more effectively by the college students than the generative

category. Attitudes questionnaires revealed that the college

students endorsed the system as a useful adjunct to the writing

environment, but only half responded that they found the computer

to be a partner in the writing process. Although they advocated

for an expanded system, with more diverse, content specific

prompts, treatment subjects reported that they were aware of using

their own internal prompting system--suggestive of preexisting

inner dialoguing when writing.

The contrasting findings between the two studies were puzzling.

Considering the Vygotskian framework, we hypothesized that the

younger subjects would benefit more from the computerized

generative-evaluative prompting since they lacked the

metacognitive guidance that the prompts provided. Moreover, the

longer prompt exposure of the middle school intervention was

deemed necessary for prompt internalization. In partial support

for this hypothesis, the younger treatment subjects responded that

the computer was indeed a partner in their writing, especially the

sixth-graders who scored the lowest in metacognitive ability.

Surprisingly, and in contrast to student attitudes, writing

performance was just the opposite of what would be expected:

the college writers wrote higher quality essays when exposed to

the prompts, while younger students were not helped.  Notice that

sixth-grade students who used prompts in a similar fashion to the

college students (i.e., they were more balanced in prompt

selection and also more productive during the second composing

session) were the only group to display gains in writing

performance as a result of the prompts. In order to situate our

findings within existing research on procedural facilitation,

there is a need for further interpretation and explanation of what

occurred in these studies. More detailed analyses of the

instructional effectiveness of the prompts may uncover hidden

situational variables, hopefully providing reasonable explanations

for this dilemma.

-----------------------------------------------------------------

Insert Table 2 about here

-----------------------------------------------------------------

Overall Prompt Usage

In exploring why the unanticipated results occurred, the

individual prompts and prompt categories invoked were reanalyzed

for frequency of use and productivity by grade and by session.

First of all, we should note that there were no overall

differences in prompt utilization between middle school and

college students (see Table 2). The 466 prompts invoked by the 30

keystroke-analyzed sixth-, seventh-, and eighth-graders converted

to 15.5 prompts per student per paper (7.7 per student per

composing session) or about 1 prompt every 6 minutes of composing

effort (Note: each session was approximately 45 minutes).

Similarly, the 203 prompts invoked by the twelve college students

in the treatment group converted to 16.9 prompts per student per

paper (8.5 per student per session).  And since each session was

one hour long, this converts to about 1 prompt every 7 minutes.  A

t-test comparing middle school to college students in total prompt

usage was nonsignificant (t(40) = 1.31, p = .56).  Thus, if the rate

of use fails to differ (at least explicitly), then either the type

of prompt invoked, the timing of prompt selection, and/or the

overall productivity rate must account for some of the performance

differences in these studies.
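
The reported rates follow directly from the raw counts and the
stated session lengths, as the following quick check shows (the 45-
and 60-minute session lengths are taken from the text):

    # Recomputing the per-student and per-minute rates from the raw
    # counts; session lengths come from the text above.
    def rates(total_prompts, n_students, sessions_per_paper, minutes_per_session):
        per_paper = total_prompts / float(n_students)
        per_session = per_paper / sessions_per_paper
        return per_paper, per_session, minutes_per_session / per_session

    print(rates(466, 30, 2, 45))   # middle school: ~15.5, ~7.8, ~5.8 minutes
    print(rates(203, 12, 2, 60))   # college: ~16.9, ~8.5, ~7.1 minutes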

Overall Prompt Effectiveness and Timing of Use

Though Table 3 highlights differences between grades on all

three factors mentioned above, distinct occurrence/timing

differences in prompt use between middle school and college

students are evident.  For instance, in the first session of

composing, middle school students invoked generative prompts over

two-thirds of the time, while the college students, though

favoring generative assistance, were more balanced. In the final

session, the younger students again favored generative prompts by

nearly a 2:1 margin, while, in contrast, the college students

selected evaluative prompts by over a 2:1 margin (see Figure 3 for

the total prompt category selection behaviors within each age

group and Figure 4 for age group prompt productivity divided by

composing session). In addition, it is interesting that middle

school students prompted more in session one (which, as mentioned,

favored generative assistance), but the college students invoked

additional prompts (typically evaluative prompts) more often

during session two when finishing their papers. The prompt per

student data provided in Table 3 and a series of t-tests described

below clarifies differences in prompt selection.

Nondirectional t-tests with an alpha protection level of .05

were performed to determine if there were differences in the

timing of prompts used overall or by major category: generative or

evaluative. First of all, analysis of within group writing

behaviors revealed that middle school students prompted

significantly more during the first session than they did during

the second session (session #1: X = 8.6, SD = 2.8; session #2: X =

6.6, SD = 3.8; t(58) = 2.25, p < .03), while the college level

writers did not differ significantly in the number of prompts

selected per session. Investigating prompt behaviors between

middle school and college student groups showed that the groups

did not vary in the total prompts selected in session one (t(40) =

1.31, p = .20), but the college students' raw prompting behavior

was higher in session two approaching significance (College: X =

10.5, SD = 8.9; Middle School: X = 6.6, SD = 3.8; t(40) = -1.95, p

< .06, n.s.).

As indicated, a comparison was also made of the

occurrence/timing of generative and evaluative prompt selection

(see Figure 5). Categorical segregation of the first and second

session prompting data indicated that middle school children used

generative prompts significantly more often than evaluative prompt

assistance in session one (X = 6.2 generative, 4.1 evaluative; t(29)

= 3.12, p < .004) and approached significance in session two (X =

4.1 generative, 2.5 evaluative; t(29) = 1.74, p < .09).  Though the

college students appeared to switch from generative prompt

emphasis in session one to evaluative prompting in session two, in

neither situation was one of these prompt categories relied on

more heavily. After totaling prompting behaviors across both

sessions, within group analyses found that middle school students

selected generative prompts significantly more often when writing

their papers than evaluative prompts (generative: X = 10.3;

evaluative: X = 5.2; t(29) = 3.33, p < .002), while college level

students, who tended to favor evaluative prompts, did not

significantly favor either prompt category while composing

(t(11) = -1.05, p = .31).

Analyses also were performed to compare prompt category usage

between middle school and college students (see Figure 6). While

prompting behaviors were not significantly different between

college and middle school students during session one, prompting

data from session two indicated that there were significant

differences in type of prompt selected by these groups; college

students favored evaluative prompts while middle school students,

in comparison, continued to invoke prompts from the generative

category (see Table 3, second session: t(40) = 2.32, p < .03).  Not

surprisingly, in comparing total prompt usage of these two groups

across the two sessions, analyses again revealed that middle

school students invoked significantly more generative prompts than

college students (t(40) = 2.66, p < .01).

----------------------------------------------------------------

Insert Table 3 about here

-----------------------------------------------------------------

Prompt Productivity by Major Category and Session

Through the replay of keystrokes it was also possible to

investigate more specific aspects of prompt productiveness (i.e.,

prompts causing a meaningful change coinciding with the intent of

the prompt). Descriptive statistics in Table 4 indicate that the

middle school students made more productive use of the generative

prompts selected during session one, while session two was more

balanced. Over both sessions, the generative prompts accounted

for over 58% of those prompts facilitating meaningful textual

changes for the middle school students. In contrast, the college

writers found the evaluative prompts more effective, especially

during the second session. In fact, over 70% of prompts spurring

meaningful changes were evaluative. Still, both groups appeared

to switch from reliance on generative prompting for ideas during

session one toward greater dependence on evaluative prompts in

session number two. For instance, the college students relied on

generative prompts by a 2:1 ratio (productively speaking) during

the first session for actual meaningful textual additions or

changes, only to later switch to evaluative prompts by over a 5:1

ratio (see Table 4) during the final session. In fact, nearly

one-fourth of evaluative prompts invoked by the college students

during session two caused direct meaningful changes to text as

coded by the adapted version of the Bridwell et al. (1985)

revisionary classification scheme.

-----------------------------------------------------------------

Insert Table 4 about here

-----------------------------------------------------------------

These descriptive statistics indicated that although the total

productive rate of the prompts was similar across age groups,

prompt productivity appeared to vary by session and prompt

category invoked. In comparing the percentage of prompts

facilitating meaningful changes with the 14% overall prompt

productivity rate across college and middle school students (refer

back to Figure 4), the younger students, though appearing to make

slightly more productive use of the prompts in the first session,

were not significantly more productive (t(29) = .57, p = .58).  In

contrast, the college students approached significance in prompt

productivity during session two (t(11) = 2.0, p = .07, n.s.).  Note,

however, that during neither of these sessions did the prompts

facilitate more than an 18% effective rate in meaningful changes

for middle school or college students. Analysis now turns to the

specific prompts that facilitated positive change.

Specific Prompt Productivity

In analyzing specific prompt statistics, the main category of

prompts that distinguished college from middle school writers was

fluency, as the younger writers obtained a 17% efficiency rating

compared to only 4% for the college students (see Table 5). The

relatively low college student productivity rate for fluency

prompts approached significance compared to the overall prompt

effectiveness rate of 14 percent (t(11) = -2.02, p = .07, n.s.).  More

specifically, the younger students found two prompts from the

fluency category more fruitful than the college students: prompt

"A" (i.e., exaggerate and contrast ideas), and prompt "Z" (i.e.,

what else might the audience want to know) (once again, see Table

1 for complete prompt descriptions). In regards to the evaluative

dimension, both the relevancy and conclusions categories favored

the college writers (Relevancy: 14% for college students versus 5%

for middle school; Conclusions: 18% versus 10% productivity rates,

respectively). Across all students, the most productive

categories of prompts were logic (22%), elaboration (18%),

originality (15%), fluency (14%), and conclusions (14%). Three

categories were subpar (below the mean): assumptions/bias (4%),

relevancy (8%), and flexibility (11%). In fact, a t-test revealed

that middle school students' prompt productivity for the relevancy

category was significantly lower than the 14 percent overall

prompt productivity rate (t(30) = -2.16, p < .04).  In addition,

both middle school and college students' productive use of

assumptions/bias prompts was significantly lower than average

(Middle school: t(29) = -2.97, p < .006; College: t(11) = -2.31, p <

.04).

-----------------------------------------------------------------

Insert Table 5 about here

-----------------------------------------------------------------

Prompt effectiveness ranged from 0% to 30%. Of the eleven

prompts that reached or exceeded the mean effectiveness rate,

seven prompts (six of which were from the generative category)

were more productively used by the middle school students (i.e.,

the "A" (to exaggerate and contrast ideas), "Z" (what else might

the audience want to know), "X" (give some points to help the

reader understand), "C" (combining two ideas into something

unique), "R" (play with and expand last idea), "Y" (give a clear

example for the reader), and "N" (provide support to original idea

for paper)), while only one clearly favored the college students

(the "H" prompt which had students read the first and last

sentences of every paragraph and provide transitions; a 64%

effectiveness rate for college students but only 6% for middle

school students). Three prompts were equally productive for both

subject groups (i.e., "V" (include exceptions to what you are

saying), "T" (relate your ideas to the main topic), and "O"

(summarize this thought in 1-2 sentences) prompts), which all

address the focus or problem statement of the paper. The most

productive prompts for the middle school students were the "C" and

"R" prompts (described above and in Table 1); but both were

generative prompts with low probability of enhancing text cohesion

and integration.

The prompt effectiveness data uncovered prompts in need of

revamping, deletion, or further interactive testing (e.g., "S"

(imagine if everything written is wrong), "F" (reread last

paragraph and add a sentence if necessary), "G" (reread paper and

delete repeated or unnecessary sentences), "B" (imagine where

writing is headed and whether information is relevant), "U"

(reflect on sources of information and state your assumptions),

and "M" (provide support for ideas and values). In fact, even

though the college students were effective in using the evaluative

prompts, the three prompts that appeared to be totally

unsuccessful across groups were the following evaluative prompts:

"G," "U," and "M" (see above descriptions or Table 1).

Discussion

Variations in prompt effectiveness between middle school and

college students and within the three middle school grades

themselves can have a number of origins. Before reflecting on

these, limitations of the comparisons reported above must be

addressed. As alluded to earlier, in accordance with Vygotskian

theory, through pilot testing the prompts were scaffolded to fall

within the student's zone of proximal development. As a result of

this scaffolding, the wording of the college prompts and prompt

categories varied slightly from the middle school prompts, though

the following three prompts differed fairly significantly between

these two main age groups: "S" (Middle School: (briefly restated)

"imagine if everything written is wrong;" College: "draw upon the

unique perspective you have"); "M" (Middle School: "provide

support for ideas and values;" College: "try to illuminate buried

assumptions and present sufficient evidence); and "G" (Middle

School: "reread paper and delete repeated or unnecessary

sentences;" College: "limit your conclusions to what is justified

by the evidence"). Considering the focus of this comparison,

these differences were considered insignificant.

Some performance discrepancies also may have occurred due to

the global nature of some of the prompts. Consequently, the

seemingly useless prompts may simply have been too vaguely worded

to code ensuing textual changes/reprocessing behaviors as

resulting directly from the prompt. Furthermore, latency effects

of some of the prompts may have masked prompt effectiveness.

Prompt usage and effectiveness may have differed due to task

variation between the middle school and college students: the

college students had the advantage of 15-minute-longer composing

sessions and a different expository task that

might have influenced their prompt selection style and

effectiveness. Finally, the varied environments (elementary

versus college computer labs) of these groups certainly affected

the atmosphere and writing behaviors within the actual composing

sessions. All these qualifications combine to affect the

conclusions drawn below.

During this review of procedural facilitation in writing, a

number of questions and concerns regarding previous and current

research efforts surfaced. The intent of the title was twofold:

to indicate that most word processors can be repurposed to

incorporate prompting questions and procedural facilitation, and

also to imply that unexpected results in this computerized

prompting study prompted many theoretical and methodological

questions. Some of the most critical questions and plausible

answers are detailed below.

First of all, why weren't the questions internalized by the

middle school students? Though there is no simple answer here,

the middle school students cannot be expected to internalize 25

prompts when they selected approximately 16 on each of three

papers (8 per session) and then actually incorporated the spirit

of a mere sixth of those within their papers. Thus, there was

limited sharing between the student and the more capable peer (the

computer) on the social plane, thereby limiting possibilities that

the skill would be internalized as independent individual

activity. Not only does internalization take longer, but new

zones of proximal development barely had a chance to form. Lack

of zone interaction between students and prompts may be due to

misperceptions of otherwise well-intentioned instructional

strategies and other affective filtering by the learner (e.g.,

prompted assistance denoting low ability and failure expectations)

(Graham & Barker, 1990). Thus, student requested or solicited

prompting in this study might indicate skill deficiencies, hence,

unfavorably impacting prompt use. In fact, when standardized

prompts are made available for a specific task, it is difficult to

imply that their use is purely learner initiated or solicited;

someone, at least, implicitly signaled that certain students might

need assistance more than others.

Second, the prompting questions that the younger students chose

(mostly generative prompts) were selected to complete a "knowledge

telling" task, not to help mindfully grasp and detail overall

textual themes. Third, there was no guarantee that the prompts

incorporated into this study operated within students' zones of

proximal development. Just how is this determined? And, how do

the prompts vary based on ability and need?

The summary of prompt usage in Tables 2-5 of this study points

to the few "macroprompts" (i.e., prompts effective across groups)

that actually were operating within both middle school and college

level zones of proximal development, though most likely differing

within particular zones. Many other prompts were mildly effective

for only one of these age groups. At the same time, other rarely

used prompts may have been particularly facilitative in the

ensuing months due to the combined effects of the prompt templates

and think sheet scaffolds, regardless of actual limited prompt

usage. And, as indicated, such latency effects would be extremely

difficult to identify and code.

The next issue concerns how prompt usage and internalization

validated the notion of writing as reprocessing and the

generative-evaluative model of writing. As evident, the prompts

were not internalized according to the measurement instruments

used. However, internalization is not a recopying process, but

involves reconstruction of the activity according to one's

perceived needs. Thus, the prompt sort task and post-test of

expository writing reported elsewhere (see Bonk & Reynolds, in

press) may not have uncovered any self-regulatory changes. Other

qualitative tracking instruments, such as keystroke mapping, did,

in fact, support the model by uncovering strategic changes in both

surface level and textual scrolling behavior of middle school

students and in meaningful changes of college students as a result

of prompted support.

The generative-evaluative model claims that younger writers

focus almost exclusively on lower-order aspects of the writing

process. Keystroke mapping revealed that this is exactly what

occurred; middle school students focused more on format, spelling,

and grammar than did college students (Reynolds & Bonk, 1990b).

In addition, less successful writers lack the executive switching

mechanism between generative and evaluative processes. The data

presented here confirm the notion of the "knowledge teller" who,

lacking in social-cognitive ability (Bonk, 1990) and higher-order

thinking and self-regulatory strategies (Flower & Hayes, 1981),

focuses almost exclusively on text generation, without reworking

her ideas into a coherent theme or addressing audience concerns.

Middle school students failed to shift their writing strategies

from predominantly text generation quests in session one to text

evaluation in session two as the more successful college students

had done. For the middle school students, the prompts continued

to act as a funnel for more things to write (Kellogg, 1989;

Bereiter & Scardamalia, 1987), not as a strategic writing partner.

Younger writers did attend less to lower-level aspects of

revision when prompted, but failed to carry out the diagnostic

processes suggested by the prompts. They moved up the writing

hierarchy temporarily, only to, once again, downslide to

spontaneous knowledge telling. In effect, they may have needed

additional explicit modeling of how to strategically operate on

one's text according to these heuristics. Alternatively, it is

conceivable that the prompt instruction fell too far ahead of

middle school students' completed developmental levels; their

zones did not extend to the boundaries of the prompts. Because

our graduated prompting aids did not induce development, perhaps

these middle school children were not developmentally ready to

perform at higher levels of composing; or perhaps, the prompts did

not meet the needs of this age group. As suggested by the model,

however, procedural prompts did refocus both middle school and

college level writers' attention on strategic aspects of

composing.

Besides the revisionary changes that occurred with the prompts,

the generative-evaluative model received support from the prompt

selection patterns of both the middle school and college students.

As stated in the model, though both planning and revising involve

a combination of generating and evaluating ideas, the emphasis is

on generating ideas during composition planning and shifts to

evaluating them during later revisionary sessions (especially for

college students--see Figure 7). It was evident from the replay

of keystrokes that students typically employed generative prompts

during the first draft of their papers (though mixing in some

evaluative prompts--about one in three prompt invocations), while

moving toward greater evaluative prompt dependency during the

second draft.

-----------------------------------------------------------------

Insert Figure 7 about here

------------------------------------------------------------------

The continued heavy use of generative prompts by the middle

school students during the second session indicated that students

either: (1) lacked enough time to plan and generate sufficiently

organized and integrated ideas during the first session; (2)

failed to understand the strategic utility of the prompts; (3)

used the prompts to stimulate preexisting "knowledge telling"

processes, not "knowledge transformation;" or (4) were affected by

a combination of the above. Importantly, though, the data verify

a critical component of the generative-evaluative model: students

can, at times, successfully use evaluative/revisionary strategies

while planning, and, alternately, rely on generative heuristics

while revising. We also argue that though young and immature

writers utilize both evaluative and generative prompting for some

textual production and restructuring, they rely more on

generative, "knowledge telling" heuristics throughout the

composing process (see Figure 8). Therefore, as many writing

theorists argue (Bereiter & Scardamalia, 1987; Bruce, Collins, &

Rubin, 1982; Caccamise, 1987; DiPardo & Freedman, 1988; Flower &

Hayes, 1981; Matsuhashi, 1987), writing cannot be reduced to a

strict linear sequence of steps or stages; students need to

constantly revise ideas, even pretextual or pre-process ones, and

they also need to explore the boundaries of their ideas while

revising or performing post-process operations. In summary, then,

a more inclusive model of procedural facilitation in writing was

tested and preliminarily validated: a generative-evaluative model.

-----------------------------------------------------------------

Insert Figure 8 about here

-----------------------------------------------------------------

A related question concerns why the older students performed

better with the prompts. As evident in their responses to the

attitude questionnaire, the older students felt that the computer

prompts were useful adjunct aids, though they already had internal

prompting procedures in place. Perhaps the success of the

evaluative prompts with the college students indicates that

strategies to check and monitor the logical flow, relevancy,

cohesion, assumptions, and resulting conclusions were still

undergoing internalization for these students. In parallel to

recent reading comprehension research, questions asked during

knowledge acquisition (i.e., knowledge construction and generative

questions) may be a less effective tool than question-asking

behaviors salient during knowledge implementation and regulation

(i.e., evaluation questions) (Fishbein, Eckart, Lauver, Van

Leeuwen, & Langmeyer, 1990). The ineffectiveness of question-

asking for younger subjects may be attributable to the difference

between the young writer's actual and perceived cohesion failures,

which the prompt questions only gradually overcome. In contrast,

the effectiveness of these prompts for the older students also

might suggest that these prompts were already internalized, but

were not yet automatic enough to operate without strategic

guidance.

The younger and less able writers may need additional modeling

and one-on-one tutoring of prompt question usefulness and more

concrete subject- or task-specific prompting strategies to monitor

their work, while encouraging greater understanding of their

intent (Pokay & Blumenfeld, 1990). As indicated, the younger and

less able students may simply require increased exposure to the

prompts before they realize that soliciting help does not signal

low-ability or failure (Graham & Barker, 1990). In fact, optimism

regarding rapid internalization of strategies is often

misguided, though, as mentioned earlier, refocusing the writers'

attention on compositional problems they would not ordinarily

consider was the initial goal of procedural facilitation

interventions (Bereiter & Scardamalia, 1987). Though some

meaningful revisions or additions were detected that may not have

occurred without the prompts, the typical use of 15 or 16 prompts

per paper with a hit rate of 15% would result in only 2.4 induced

revisions (see figures on prompt productivity presented earlier), hardly enough for a significant impact on paper quality.
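
The arithmetic behind that estimate is worth making explicit. A minimal sketch in Python, using the approximate figures reported above (15-16 prompts per paper, a 15% hit rate):

    # Expected prompt-induced revisions per paper: prompts invoked per
    # paper multiplied by the proportion of invocations judged productive.
    def expected_induced_revisions(prompts_per_paper, hit_rate):
        return prompts_per_paper * hit_rate

    print(expected_induced_revisions(16, 0.15))  # -> 2.4 revisions per paper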

In terms of the computerized instructional design component of

this study and Vygotskian theory, can a computer be a good

generative-evaluative thinking skill model and collaborative

partner? The lack of positive effects of the prompts with the

middle school students should not discount their utility. The

novel prompting and keystroke mapping environment did not inhibit

performance. In fact, the middle school students in the treatment

group were more positive about the computer as a collaborative

partner (a co-processor) than the control group, and produced

qualitatively equivalent papers under these novel cognitive and

perceptual environmental modifications. For at least six weeks

(the amount of time the prompts, templates, and think sheets were

available), an attempt was made to shift these young writers from

mere self- and word-processing focuses toward knowledge- and co-

processing concerns.

A mixture of live modeling of the generative-evaluative

processes in writing and keystroke replays of successful writers

using the prompts may facilitate additional internalization by

novice writers. Social exchange and dialogue, overtly displaying

metacognitive knowledge of writing, may nurture appreciation of

the value and purpose of the heuristics and involve students in

more mature discussions and solution strategies. In accordance

with Vygotskian theory, internalization occurs when patterns of

social activity first performed externally are executed on an

internal plane (Wertsch, 1985a, 1985b). Perhaps the supplemental

modeling and pedagogically sound instructional explication of the

prompt strategies would foster greater internalization.

A pragmatic question regarding other procedures that might be

added to this environment to facilitate internalization persisted

throughout this study. As indicated, additional peer and teacher

instructional supports such as modeling and dialogue would affect

prompt usage and productivity. For instance, students might

actively create prompts and prompting categories and later model

prompt usage for one another. In addition, teachers might link

the prompts created to other classroom lessons and activities. In

fact, the objectives of the prompts should be introduced prior to

classroom use. Other procedural facilitation, planning, and

revising tools available to support composition activities also

might increase the possibility of prompt internalization.

Teachers and researchers must search for ways to extend

existing software for metacognitive possibilities like the

WordPerfect macros used to create the prompts and keystroke maps

of this study. In addition, educators must consider new software

package announcements from a reprocessing environment perspective,

or, more specifically, whether the tool fosters self-processing,

knowledge/idea/language-processing, co-processing, as well as word

processing operations.
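
To make that mechanism concrete, the sketch below is a hypothetical illustration, in Python rather than the WordPerfect macro language the study actually used, of the core binding reported in Table 1: each letter key invokes a generative or evaluative prompt, and every invocation is time-stamped so that usage can later be replayed, much as the keystroke maps were. The names (PROMPTS, usage_log, invoke_prompt) are illustrative, not the study's implementation.

    import time

    # Hypothetical illustration only; the study used WordPerfect macros.
    # Keys and prompt texts follow Table 1 (three of the bindings shown).
    PROMPTS = {
        "Q": ("generative: fluency",
              "List all that you know about your topic in your head or on paper."),
        "T": ("evaluative: relevancy",
              "Think about the problem or original topic. Is everything "
              "you've said needed or related to it?"),
        "H": ("evaluative: logic",
              "Read the first and last sentences of each paragraph. Are "
              "there transitions from one sentence to the next?"),
    }

    usage_log = []  # (timestamp, key, category): raw data for a keystroke map

    def invoke_prompt(key):
        """Return the prompt bound to a letter key and log the invocation."""
        category, text = PROMPTS[key.upper()]
        usage_log.append((time.time(), key.upper(), category))
        return text  # shown at the bottom of the screen during composing

Replaying usage_log in order is, in miniature, what the keystroke replay tools made possible for whole composing sessions.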

The word processor was a masterful accomplishment for the lower

level aspects of writing; now the road to ideal writing

environments becomes more arduous and varied. Figure 1 points

toward higher-order generative-evaluative expert writing concerns

that might steady our forward course. Procedural facilitation is

just one tool for refocusing the writer on executive processes and

strategies.

Finally, one should ask what other directions research in this area should take. We feel that individual case studies may

uncover the interaction patterns and approximate time requirements

to internalize the most facilitative "macroprompts." A

combination of videotape records of writing behaviors, verbal

protocols, keystroke mapping, and interviewing during the replay

of videotapes and keystrokes may disclose microgenetic

developmental processes in effect immediately prior to and after

prompt internalization. Students who observed and analyzed their

own keystroke replays and also those of more expert writers might

be induced to incorporate higher-order writing strategies absent

from their internal repertoire. Peer feedback, comparison, and

sharing of perspectives regarding what is transpiring during the

replay of someone's paper might foster the strategic gains

desired. All of these interactive techniques may increase the

perceived value of computerized prompting, which may be a strong

predictor of strategy use (Pokay & Blumenfeld, 1990).

Of course, additional investigations are needed to uncover why

prompts may be more useful at certain grades, ability levels, and

moments in the composing process. Several other studies might be

undertaken to increase our understanding of the effectiveness of

various types of prompts within writing. A comparison or

combination of prompt types (e.g., topic specific versus general

prompts) might point to critical needs at different age and

ability levels.

Summary Reflections

It is difficult to speculate about, or even attempt to predict, the potential utility of procedural facilitation and keystroke

mapping tools. As evident in this review, regardless of the

soundness of the theoretical rationale, new computer-based tools will

not prove equally effective across grade and ability levels.

Older students and experienced writers may benefit more from these

crude writing partnerships simply because they have more writing

skill and resources to augment computer-based facilitative writing

environments. Still, new models of writing will need to be

developed, tested, and validated, possibly modeled after the

generative-evaluative reprocessing and keystroke mapping framework

highlighted in this article. As a result of efforts already

underway, not only has the word processor been repurposed and

extended for cognitive enhancement and writing behavior tracking,

but quests into the psychology of writing have been further

informed.

References

Adams, D., Carlson, H., & Hamm, M. (1990). Cooperative learning and educational media: Collaborating with technology and each other. Englewood Cliffs, NJ: Educational Technology Publications.

Bailey, T. (1991). Jobs of the future and the education they will require: Evidence from occupational forecasts. Educational Researcher, 20(2), 11-20.

Bereiter, C. (1980). Development in writing. In L. W. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Bereiter, C., & Scardamalia, M. (1982). From conversation to composition: The role of instruction in a developmental process. In R. Glaser (Ed.), Advances in instructional psychology (Vol. 2, pp. 1-64). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Bereiter, C., & Scardamalia, M. (1985). Cognitive coping strategies and the problem of "inert knowledge." In J. W. Segal, S. F. Chipman, & R. Glaser (Eds.), Thinking and learning skills (Vol. 2, Research and open questions, pp. 65-80). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Bereiter, C., & Scardamalia, M. (1987). The psychology of written composition. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Bonk, C. J. (1988). The effects of convergent and divergent computer software on the development of children's critical and creative thinking. Paper presented at the American Educational Research Association annual convention, New Orleans, LA. (ERIC Document Reproduction Service No. ED 296 715)

Bonk, C. J. (1989). The effects of generative and evaluative computerized prompting strategies on the development of children's writing awareness and performance. Dissertation Abstracts International, 51-03, 789A. (University Microfilms No. 89-23, 313)

Bonk, C. J. (1990). A synthesis of social-cognition and writing research. Written Communication, 7(1), 136-163.

Bonk, C. J. (1991). The emergence of cooperative reading: Analyzing components of successful programs and strategies. Paper presented at the American Educational Research Association annual convention, Chicago, IL.

Bonk, C. J., Middleton, J. H., Reynolds, T. H., & Stead, F. L. (1991). The Index of Writing Awareness: One measure of early adolescent metacognitive ability in writing. Paper presented at the American Educational Research Association annual convention, Chicago, IL.

Bonk, C. J., Paige, J. H., & Jones, D. F. (in press). Expanding Project Y.E.S. through creation of a learning center complex and computer-based social interaction lab. River East School Division Journal, 3(1).

Bonk, C. J., & Reynolds, T. H. (in press). Early adolescent composing within a generative-evaluative computerized prompting framework. Computers in Human Behavior, Special Issue on Computers and Writing, June 1991.

Bridwell-Bowles, L., Johnson, P., & Brehe, S. (1987). Composing and computers: Case studies of experienced writers. In A. Matsuhashi (Ed.), Writing in real time (pp. 81-107). Norwood, NJ: Ablex.

Bridwell, L. S., Nancarrow, P. R., & Ross, D. (1984). The writing process and the writing machine: Current research on word processors relevant to the teaching of composition. In R. Beach & L. S. Bridwell (Eds.), New directions in composition research. New York, NY: The Guilford Press.

Bridwell, L., Sirc, G., & Brooke, R. (1985). Revising and computing: Case studies of student writers. In S. W. Freedman (Ed.), The acquisition of written language (pp. 172-194). Norwood, NJ: Ablex Publishing Company.

Brown, A. L., & Ferrara, R. A. (1985). Diagnosing zones of proximal development. In J. V. Wertsch (Ed.), Culture, communication, and cognition: Vygotskian perspectives (pp. 273-305). New York, NY: Cambridge University Press.

Brown, A. L., & Palincsar, A. S. (1989). Guided, cooperative learning and individual knowledge acquisition. In L. B. Resnick (Ed.), Knowing, learning, and instruction: Essays in honor of Robert Glaser. Hillsdale, NJ: Erlbaum.

Brown, J. S., Collins, A., & Duguid, P. (1988). Cognitive apprenticeship, situated cognition, and social interaction (Technical Report No. 6886). Bolt, Beranek, and Newman, Inc.

Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of learning. Educational Researcher, 18(1), 32-41.

Bruce, B., Collins, A., Rubin, A. D., & Gentner, D. (1982). Three perspectives on writing. Educational Psychologist, 17(3), 131-145.

Caccamise, D. J. (1987). Idea generation in writing. In A. Matsuhashi (Ed.), Writing in real time: Modeling production processes (Chapter 9, pp. 224-253). Norwood, NJ: Ablex.

Carey, L. (1988). Pushing planning to the limit: Using computer-based prompts to elicit writers' plans. Paper presented at the American Educational Research Association annual convention, New Orleans, LA.

Daiute, C. (1985). Do writers talk to themselves? In S. W. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 133-159). Norwood, NJ: Ablex Publishing Corporation.

Daiute, C. (1986a). Do 1 and 1 make 2? Patterns of influence by collaborative partners. Written Communication, 3(3), 382-408.

Daiute, C. A. (1986b). Physical and cognitive factors in revising: Insights from studies with computers. Research in the Teaching of English, 20(2), 141-159.

Daiute, C., & Kruidenier, J. (1985). A self-questioning strategy to increase young writers' revisionary processes. Applied Psycholinguistics, 6, 307-318.

Davis, G. A. (1986). Creativity is forever (2nd ed.). Dubuque, IA: Kendall/Hunt Publishing Company.

DeVillar, R. A., & Faltis, C. J. (1991). Computers and cultural diversity: Restructuring for school success. Albany, NY: State University of New York Press.

Dickinson, D. K. (1986). Cooperation, collaboration, and the computer: Integrating the computer into a first-second grade writing program. Research in the Teaching of English, 20(4), 357-378.

DiPardo, A., & Freedman, S. W. (1988). Peer response groups in the writing classroom: Theoretical foundations and new directions. Review of Educational Research, 58(2), 119-149.

Englert, C. S., Raphael, T. E., Fear, K. L., & Anderson, L. M. (1988). Students' metacognitive knowledge about how to write informational texts. Learning Disability Quarterly, 11, 18-46.

Feigenbaum, E. A., & McCorduck, P. (1984). The fifth generation: Artificial intelligence and Japan's computer challenge to the world. New York, NY: Signet.

Fishbein, H. D., Eckart, T., Lauver, E., Van Leeuwen, R., & Langmeyer, D. (1990). Learner's questions and comprehension in a tutoring session. Journal of Educational Psychology, 82(1), 163-170.

Fitzgerald, J., & Markham, L. R. (1987). Teaching children about revision in writing. Cognition and Instruction, 4(1), 3-24.

Flower, L., & Hayes, J. R. (1981). A cognitive process theory of writing. College Composition and Communication, 32, 365-387.

Flower, L., Hayes, J. R., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, diagnosis, and the strategies of revision. College Composition and Communication, 37(1), 16-55.

Freedman, S. W. (1987). Response to student writing. Urbana, IL: National Council of Teachers of English.

Friedman, M., & Rand, E. (1989). A computer-based writing aid for students: Present and future. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 129-141). Hillsdale, NJ: Erlbaum.

Garner, R., Gillingham, M. G., & White, C. S. (1989). Effects of "seductive details" on macroprocessing and microprocessing in adults and children. Cognition and Instruction, 6(1), 41-57.

Glynn, S. M., Oaks, D. R., Mattocks, L. F., & Britton, B. K. (1989). Computer environments for managing writers' thinking processes. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 1-16). Hillsdale, NJ: Erlbaum.

Graham, S., & Barker, G. P. (1990). The down side of help: An attributional-developmental analysis of helping behavior as a low-ability cue. Journal of Educational Psychology, 82(1), 7-14.

Graves, A., Montague, M., & Wong, Y. (1990). The effects of procedural facilitation on the story composition of learning disabled students. Learning Disabilities Research, 5(2), 88-93.

Hayes, J. R., & Flower, L. S. (1986). Writing research and the writer. American Psychologist, 41(10), 1106-1113.

Higgins, L., Flower, L., & Petraglia, J. (1990). Planning text together: The role of critical reflections in student collaboration. Paper presented at the American Educational Research Association annual convention, Boston, MA.

Inhelder, B., & Piaget, J. (1958). The growth of logical thinking: From childhood to adolescence. New York, NY: Basic Books, Inc.

Iran-Nejad, A. (1990). Active and dynamic self-regulation of learning processes. Review of Educational Research, 60(4), 573-602.

Iran-Nejad, A., McKeachie, W. J., & Berliner, D. C. (1990). The multisource nature of learning: An introduction. Review of Educational Research, 60(4), 573-602.

Isaacson, S., & Mattoon, C. B. (1990). The effects of goal constraints on the writing performance of urban learning disabled students. Learning Disabilities Research, 5(2), 94-99.

Isaksen, S. G., & Parnes, S. J. (1985). Curriculum planning for creative thinking and problem solving. The Journal of Creative Behavior, 19(1), 1-29.

Kellogg, R. T. (1989). Idea processors: Computer aids for planning and composing text. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 57-92). Hillsdale, NJ: Erlbaum.

Kintsch, E. (1990). Macroprocesses and microprocesses in the development of summarization skill. Cognition and Instruction, 7(3), 161-195.

Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363-394.

Kozma, R. B. (1987). The implications of cognitive psychology for computer-based learning tools. Educational Technology, 27(11), 20-25.

Kozma, R. B. (in press). The impact of computer-based tools and embedded prompts on writing processes and products of novice and advanced college writers. Cognition and Instruction.

Levin, J. A., Riel, M. M., Rowe, R. D., & Boruta, M. J. (1985). Muktuk meets jacuzzi: Computer networks and elementary school writers. In S. W. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 160-171). Norwood, NJ: Ablex Publishing Corporation.

Levin, J. R. (1982). Pictures as prose-learning devices. In A. Flammer & W. Kintsch (Eds.), Tutorials in text processing. Amsterdam, Netherlands: North-Holland.

Matsuhashi, A. (1987). Revising the plan and altering the text. In A. Matsuhashi (Ed.), Writing in real time: Modeling production processes (Chapter 8, pp. 197-223). Norwood, NJ: Ablex.

Matsuhashi, A., & Gordon, E. (1985). Revision, addition, and the power of unseen text. In S. W. Freedman (Ed.), The acquisition of written language: Response and revision (Chapter 12, pp. 226-249). Norwood, NJ: Ablex.

Montague, M. (1990). Computers, cognition, and writing instruction. Albany, NY: State University of New York Press.

Neuwirth, C., Kaufer, D., Chimera, R., & Gillespie, T. (1987). The Notes program: A hypertext application for writing from source texts. Paper presented at Hypertext '87, Chapel Hill, NC.

Palincsar, A. S. (1985). The unpacking of a multi-component, metacognitive training package. Paper presented at the American Educational Research Association annual convention, Chicago, IL.

Palincsar, A. S. (1986). The role of dialogue in providing scaffolded instruction. Educational Psychologist, 21(1 & 2), 73-98.

Palincsar, A. S., & Brown, D. A. (1987). Enhancing instructional time through attention to metacognition. Journal of Learning Disabilities, 20(2), 66-75.

Paris, S. G. (1988). Fusing skill and will in children's learning and schooling. Paper presented at the American Educational Research Association annual convention, New Orleans, LA.

Pokay, P., & Blumenfeld, P. C. (1990). Predicting achievement early and late in the semester: The role of motivation and use of strategies. Journal of Educational Psychology, 82(1), 41-50.

Purves, A. C. (1985). Framework for scoring: GRE/TOEFL. Unpublished manuscript, University of Illinois at Urbana-Champaign, Curriculum Laboratory.

Reynolds, T. H., & Bonk, C. J. (1990a). Facilitating college writers' revisionary processes. Paper presented at the American Educational Research Association annual convention, Boston, MA.

Reynolds, T. H., & Bonk, C. J. (1990b). Windows on writing: The usefulness of keystroke mapping to monitor writing progress. Paper presented at the 6th Computers and Writing Conference, Austin, TX.

Ruth, L., & Murphy, S. (1988). Designing writing tasks for the assessment of writing. Norwood, NJ: Ablex Publishing Company.

Salomon, G. (1988). AI in reverse: Computer tools that turn cognitive. Journal of Educational Computing Research, 4(2), 123-139.

Scardamalia, M., & Bereiter, C. (1982). Assimilative processes in composition planning. Educational Psychologist, 17(3), 165-171.

Scardamalia, M., & Bereiter, C. (1985). Fostering the development of self-regulation in children's knowledge processing. In J. W. Segal, S. F. Chipman, & R. Glaser (Eds.), Thinking and learning skills (Vol. 2, Research and open questions). Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Scardamalia, M., & Bereiter, C. (1986). Research on written composition. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 778-803). New York: Macmillan Education Ltd.

Scardamalia, M., Bereiter, C., & Goelman, H. (1982). The role of production factors in writing ability. In M. Nystrand (Ed.), What writers know: The language, process, and structure of written discourse (pp. 173-210). London: Wiley.

Scardamalia, M., Bereiter, C., McClean, R. S., Swallow, J., & Woodruff, E. (1989). Computer-supported intentional learning environments. Journal of Educational Computing Research, 5(1), 51-68.

Scardamalia, M., Bereiter, C., & Steinbach, R. (1984). Teachability of reflective processes in written composition. Cognitive Science, 8, 173-190.

Schoenfeld, A. H. (1988). Mathematics, technology, and higher order thinking. In R. S. Nickerson & P. P. Zodhiates (Eds.), Technology in education: Looking toward 2020. Hillsdale, NJ: Erlbaum.

Smith, J. B., & Lansman, M. (1989). A cognitive basis for a computer writing environment. In B. K. Britton & S. M. Glynn (Eds.), Computer writing environments: Theory, research, and design (pp. 17-56). Hillsdale, NJ: Erlbaum.

Swallow, J., Scardamalia, M., & Olivier, W. P. (1988). Facilitating thinking skills through peer interaction with software support. Paper presented at the American Educational Research Association annual convention, New Orleans, LA.

Talbot, J. (1986). The assessment of critical thinking in history/social science through writing. Social Studies Review, 25(2), 33-41.

Tharp, R. G., & Gallimore, R. (1988). Rousing minds to life: Teaching, learning, and schooling in social context. New York: Cambridge University Press.

Toews, O. B. (1989). Challenge to learn: Building knowledge and writing. River East School Division Journal, 1(1), 50-78.

Torrance, E. P. (1974). Norms and technical manual: Torrance tests of creative thinking (rev. ed.). Bensenville, IL: Scholastic Testing Service.

Turner, J. (1990). Image processing: A complement to text. The Computing Teacher, 17(6), 24-26.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes (M. Cole, V. John-Steiner, & E. Souberman, Eds. & Trans.). Cambridge, MA: Harvard University Press.

Vygotsky, L. (1986). Thought and language (rev. ed.). Cambridge, MA: The MIT Press.

Wertsch, J. V. (1985a). Adult-child interaction as a source of self-regulation in children. In S. R. Yussen (Ed.), The growth of reflection in children. Orlando, FL: Academic Press, Inc.

Wertsch, J. V. (1985b). Vygotsky and the social formation of mind. Cambridge, MA: Harvard University Press.

Witte, S. P. (1985). Revising, composition theory, and research design. In S. W. Freedman (Ed.), The acquisition of written language: Response and revision (pp. 250-284). Norwood, NJ: Ablex.

Woodruff, E., Bereiter, C., & Scardamalia, M. (1981). On the road to computer assisted composition. Journal of Educational Technology Systems, 10(2), 133-148.

Figure 1

Model for Analyzing Writing Environments and Behaviors within a Reprocessing Framework

Figure 2

Proposed Model of the Generative-Evaluative Processes in Composing

Table 1

Generative and Evaluative Prompt Listing
(Note: this particular list was used with the middle school students. The letter shown with each prompt (e.g., "Q") indicates where that prompt was located on the keyboard.)
______________________________________________________________________________

Generative Prompts

Fluency (middle school students knew this as: MORE IDEAS):
  Q: List all that you know about your topic in your head or on paper. You may want to jot down items in the list that are not in your paper.
  A: Ask yourself: What other ideas does this suggest? What could I add here? And, how could I exaggerate or maybe say the opposite?
  Z: What else might your audience want to know? Would the reader want to know about the smell, sight, sound, or touch of your object?

Flexibility (i.e., TYPES OF IDEAS):
  W: Add other categories, models, examples, or lists. You might try to use pictures in your head to compare points.
  S: Just imagine if everything you've said so far is wrong. If the reader caught it, what changes might he/she suggest?
  X: Think again about your reader. Are there other points of view that are necessary for your reader to understand?

Originality (i.e., NEW IDEAS):
  E: Try out a wild idea or describe your last thought in a metaphor. How is a ____ like a ____???
  D: Ask yourself "What if...?" and then reflect on what might happen to change your mind on this topic.
  C: Try combining two or more of your ideas into something really unique. Have you used your creativity or imagination?

Elaboration (i.e., EXTENDERS):
  R: Have some fun, play with the last idea, expand or extend it, and then maybe contrast it with something else.
  F: Reread your last paragraph. Would expanding or adding a sentence help your reader understand?
  V: Try to broaden the focus of your paper by including exceptions to what you are saying.

Evaluative Prompts

Relevancy (i.e., QUALITY):
  T: Think about the problem or original topic. Is everything you've said needed or related to it?
  G: Reread your paper and delete repeated or unneeded sentences which don't help form an overall theme.
  B: Try to see or imagine where your writing is headed. Is the information you're providing good and also relevant?

Logic (i.e., CLEAR/LOGICAL):
  Y: Give an example that might make your reasoning clearer to the reader. State all examples in clear and simple ways.
  H: Read the first and last sentences of each paragraph. Are there transitions from one sentence to the next?
  N: Think back about your original idea or opinion on this topic. What can you say now to provide support for your entire paper?

Assumptions (i.e., ASSUMING):
  U: Reflect on the sources of your information. Are your sources and your assumptions stated as such in your paper?
  J: Read over your paper for personal bias; look out for sentences where you say "I feel" or "I think" without backing them up.
  M: Will your audience agree with your values, opinions, or ideas? If not, list something that might help get your point across.

Conclusions (i.e., CONCLUSIONS):
  I: Have you provided enough information to back up your claims and conclusions? And are there other effects to what you're saying?
  K: Are there different conclusions to what you are saying? Try to explain these so they make sense for the reader.
  O: Can you summarize to the reader what you have said in one or two sentences? Try to do this at the end of each paragraph or idea.

OTHER PROMPTS (for whole paper):
  L: Step back and look at your whole paper. Are your thoughts and ideas logically stated, justified, interesting, and unique?
  P: (Clears the bottom screen)

Table 2. Overall Prompt Usage During Two-Session Composing Effort
-----------------------------------------------------------------
                      Total Prompts   Prompts/Paper   Prompts/Minute
Middle School:
  Sixth (N = 10)           178             17.8             .20
  Seventh (N = 10)         138             13.8             .15
  Eighth (N = 10)          150             15.0             .17
  -----------------------------------------------------------------
  Total (N = 30)           466             15.5             .17

College (N = 12)           203             16.9             .14
-----------------------------------------------------------------

Table 3. Total Prompts Selected by Session and Resulting Effectiveness
-------------------------------------------------------------------------------------
                          Number of Prompts Invoked              Resulted In
                      Generative    Evaluative    Total          % Productive

Middle School:
6th Session One        78 (74%)      27 (26%)     105 (59%)         15.2%
6th Session Two        40 (55)       33 (45)       73 (41)          17.8
Total 6th             118 (66)       60 (34)      178               16.3

7th Session One        43 (55%)      34 (44%)      77 (56%)         13.0%
7th Session Two        41 (67)       20 (33)       61 (44)           9.8
Total 7th              84 (61)       54 (39)      138               11.6

8th Session One        66 (78%)      19 (22%)      85 (57%)         14.1%
8th Session Two        42 (65)       23 (35)       65 (43)          12.3
Total 8th             108 (72)       42 (28)      150               13.3

6-8th Session One     187 (70%)      80 (30%)     267 (57%)         14.2%
  (prompts/student)     6.2           2.7           8.9
6-8th Session Two     123 (62)       76 (38)      199 (43)          13.6
  (prompts/student)     4.1           2.5           6.6
Total 6th-8th         310 (67)      156 (33)      466               13.9
  (prompts/student)    10.3           5.2          15.5
-------------------------------------------------------------------------------------
College:
Session One            44 (56%)      34 (44%)      78 (38%)         10.0%
  (prompts/student)     3.7           2.8           6.5
Session Two            44 (35)       81 (65)      125 (62)          17.6
  (prompts/student)     3.7           6.8          10.5
Total College          88 (43)      115 (57)      203               13.8
  (prompts/student)     7.4           9.6          17.0
-------------------------------------------------------------------------------------

Table 4. Prompt Productiveness by Session and General Category
-------------------------------------------------------------------------------------
                          Percentage of Prompts Effective
                      Generative    Evaluative    Total

Middle School:
6th Session One         15.4%         14.8%       15.2%
6th Session Two         20.0          15.2        17.8
Total 6th               16.9          15.0        16.3

7th Session One         18.6%          6.5%       13.0%
7th Session Two         12.2           5.0         9.8
Total 7th               15.5           5.5        11.6

8th Session One         16.7%          5.2%       14.1%
8th Session Two          9.5          17.4        12.3
Total 8th               13.9          11.9        13.3

6-8th Session One       16.6%          8.8%       14.2%
6-8th Session Two       13.8          13.2        13.6
Total 6th-8th           15.5          10.9        13.9

Total # Effective: 38 generative prompts (58.5%)
                   27 evaluative prompts (41.5%)
==================================================================
College:
Session One             13.6%          5.9%       10.0%
Session Two              4.5          24.7        17.6
Total College            9.1          19.1        13.8

Total # Effective:  8 generative prompts (26.6%)
                   22 evaluative prompts (73.3%)
==================================================================

Table 5. Individual and Categorical Prompt Effectiveness
--------------------------------------------------------------------------------------
                                      Percentage of Prompts Effective
                                   Middle School     College       Total

GENERATIVE PROMPTS:
Q  List all you know                  2/20             0/6          2/26  (8%)
A  Exaggerate/contrast                6/30             0/10         6/40  (15%)
Z  Audience want to know              5/25             1/10         6/35  (17%)
   Total Fluency                     13/75 (17%)       1/26 (4%)   14/101 (14%)

W  Models, examples, pictures         2/22             1/11         3/33  (9%)
S  Everything wrong & perspectives    0/22             1/7          1/29  (3%)
X  Reader might understand            5/23             1/8          6/31  (19%)
   Total Flexibility                  7/67 (10%)       3/26 (12%)  10/93  (11%)

E  Wild idea, metaphor                3/26             1/8          4/34  (12%)
D  Ask what if?                       3/39             1/5          4/44  (9%)
C  Combine two ideas                  9/27             1/6         10/33  (30%)
   Total Originality                 15/92 (16%)       3/19 (16%)  18/121 (15%)

R  Play with last idea                8/25             1/6          9/34  (26%)
F  Reread & expand for reader         1/24             1/6          2/30  (6%)
V  Include exceptions                 3/16             3/17         6/33  (18%)
   Total Elaboration                 12/65 (18%)       5/29 (17%)  17/94  (18%)

EVALUATIVE PROMPTS:
T  Relevant to topic                  2/13             3/12         5/25  (20%)
G  Delete unneeded sentences          0/18             0/9          0/27  (0%)
B  Imagine where writing headed       0/12             1/7          1/19  (5%)
   Total Relevancy                    2/43 (5%)        4/28 (14%)   6/71  (8%)

Y  Clear examples                     3/8              1/13         4/21  (19%)
H  Reread for transitions             1/17             7/11         8/28  (29%)
N  Link & support entire paper        4/15             0/10         4/25  (16%)
   Total Logic                        8/40 (20%)       8/34 (24%)  16/74  (22%)

U  Sources and assumptions            0/9              0/9          0/18  (0%)
J  Personal bias "I feel"             1/13             1/9          2/22  (9%)
M  Provide support for ideas          0/10             0/7          0/17  (0%)
   Total Assumptions                  1/32 (3%)        1/25 (4%)    2/57  (4%)

I  Back up claims & implications      0/11             2/14         2/25  (8%)
K  Different conclusions              1/13             2/13         3/26  (12%)
O  Summarize in 1-2 sentences         3/15             3/11         6/26  (23%)
   Total Conclusions                  4/39 (10%)       7/38 (18%)  11/77  (14%)
--------------------------------------------------------------------------------------
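
Note: the percentages in Tables 3-5 are simple effectiveness ratios (productive invocations divided by total invocations). A minimal sketch of the computation in Python, using the Table 5 row for prompt C as sample values:

    # Effectiveness ratio reported throughout Tables 3-5.
    def effectiveness(productive, invoked):
        """Percentage of prompt invocations that produced a meaningful change."""
        return 100 * productive / invoked

    # Prompt C ("Combine two ideas"): 9/27 middle school + 1/6 college = 10/33.
    print(round(effectiveness(10, 33)))  # -> 30, the total reported for prompt C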
