
Impacts of Student Categorization of their Online Discussion Contributions


Jim Flowers, Ph.D. & Samuel E. Cotton, Ph.D.

Professor & Director of Online Education, and Assistant Professor

Department of Industry and Technology

AT131 Ball State University, Muncie, IN 47306-0255

765-285-2879 FAX: 765-285-2162 [email protected]



1/18/06


Impacts of Student Categorization of their Online Discussion Contributions


This study examined the impacts of having online graduate students categorize their own online discussion contributions according to an instructor’s criteria. Student messages before and after this categorizing activity were compared: unitized by terminal punctuation, then classified for function, skill, and level based on Henri and Rigault’s (1996) content analysis framework. There was an unexpected decrease in the quantity of messages following the treatment. More surprising was the decrease in four measures of the quality of cognitive dialog: percent cognitive units among four functions, percent inference and percent analysis or elaboration (in-depth clarification) among five cognitive skills, and percent high-level processing. The treatment appears to have stifled cognitive dialog, although qualitative data suggested there may have been some advantages for some students.


Introduction

In online, text-based education, Transactional Distance, or “the gap of understanding and communication between teachers and learners caused by geographic distance” (Moore and Kearsley 2005, 223), creates difficulties in communication due to restrictions on verbal and visual cues. Online dialog, or positive interactions aimed at increasing student understanding (Moore 1993), is a tool often used in education. Providing students with feedback on the quality of their dialog increases demand on instructors’ time and thus may be prohibitive. This study investigated an intervention that did not require individual instructor feedback but instead had each student self-categorize their previous online discussion contributions, to determine whether this could improve the quantity and quality of students’ cognitive dialog.

Kanuka, Collett, and Caswell (2002) examined instructors’ perceptions of asynchronous discussions and noted that “most instructors continue to experience a tension between structure, dialogue, and autonomy” (151). Several themes regarding the technical, managerial, social, and pedagogical roles of an online instructor emerged from the interviews in their study, with only “feedback” identified under pedagogical roles: “It is important that instructors are skilled at providing feedback to students in a manner that is not only timely but also in ways that overcome the absence of paralinguistic cues” (156).

There was concern expressed by the instructors (both inexperienced and experienced) that it is sometimes difficult when using Internet communication tools to know when it is best to provide feedback to the group or the individual. (Kanuka, Collett, and Caswell 2002, 162)

Faculty time requirements per student have been found to be higher for distance courses than for face-to-face (Bender, Wood, and Vredevoogd 2004). Instructor time can be further limited when class size increases, and some seek strategies especially useful in larger online classes (Giguere, Formica, and Harding 2004). Yet, as Bender, Wood, and Vredevoogd (2004) noted, “Technology frees the faculty member from delivering fundamental and redundant information, which provides more opportunity for engaging students in extended discussions” (112).

Bonk, Kirkley, Hara, and Dennen (2001) identified four competing functions of instructors’ discussion contributions: pedagogical, managerial, social, and technological. Instructor feedback about how students should communicate in an online forum is managerial. By devoting more time to managerial issues, there is less time available for critical pedagogical issues and course content.

Roblyer and Wiencke (2003) examined the use of a rubric to both assess and encourage interactive discussions, noting that, “identifying and assessing observable indicators of interaction in distance courses is essential in order to encourage greater interaction and study its impact” (89). However, the students were assessing interactive qualities of a distance course, such as the interactivity of technological resources and evidence of instructor engagement, rather than their own dialog.

Reviewing one’s own contributions to an online asynchronous discussion may yield several advantages:

This self-review may be problematic. Duffy, Dueber, and Hawley (1998) held that “critical thinking is an effortful, often difficult process” and suggested encouraging it “through the inclusion of specific tools related to inquiry” and supporting it “by focusing students on the process and structure of the inquiry” (64). They suggested that individuals “be reflective of the argument and their contribution to the argument in the action of making a contribution” (64).

Moore (1989) identified three forms of interaction: between student and instructor, between students, and between a student and content. Reflecting on one’s contributions to a cognitive dialog is a metacognitive activity rather than an interpersonal interaction, yet it serves functions similar to these three forms of interaction if the person doing the reflecting is seen as different from the person who initially contributed. Hannafin et al. (2003) noted that the greater independence of students in an online environment may create a greater need to engage in metacognitive activities, reflecting on their own cognition. Holmberg (2003) suggested a theory of distance education based on empathy: “Feelings of empathy and belonging promote the students’ motivation to learn and influence the learning favorably” (82). While engaging in conversations with others is an obvious path toward empathy, individually reflecting on one’s own communications to others might help in seeing one’s contributions more nearly as others do, promoting empathy. Engaging in such critical thinking, though difficult, can be nurtured through practice with feedback on performance (van Gelder 2005).

Henri (1992) and Jeong (2003) suggested that the unit of analysis be the unit of meaning, an approach employed by Rose (2002). To address the concern that lengthy, complex messages were not adequately counted, Rose and Flowers (2003) adopted the sentence (i.e., terminal punctuation) as the unit of analysis. A scheme for classifying dialog by function was suggested by Henri (1992) and expanded to classify cognitive dialog by skill and processing level (Henri and Rigault 1996). This was adapted by other researchers examining dialog in online courses (Rose 2002; Rose and Flowers 2003), who classified the function of each unit as cognitive, metacognitive, organizational, or social, using a hierarchical approach in which a unit that was both cognitive and metacognitive was classified as the higher function, cognitive. Cognitive units were then classified as inference, in-depth clarification, judgment, cognitive strategy, or elementary clarification, again using a hierarchical approach. Complexity, or level of processing, was then used to classify cognitive units as deep (high) or surface (low).


Problem


The purpose of this study was to determine the effects of having online graduate students engage in a self-categorization of their individual discussion contributions.


Research Questions

  1. Is there an increase in discussion quantity after self-categorization?

  2. Is there an increase in the percentage of cognitive dialog after self-categorization?

  3. What differences in cognitive skills (classified after a scheme based on Henri (1992) and Henri and Rigault (1996), as adapted by Rose and Flowers (2003)) are revealed after self-categorization, and in particular, is there an increase in the percentage of the higher order cognitive skills of analysis/elaboration and inference?

  4. Is there an increase in the percentage of high level units following self-categorization?

  5. What are students’ perceptions about the effect of self-categorization on discussion contributions?

  6. Is there evidence of increased self-awareness or resolutions for changed behavior based on self-categorization?


Methodology


This study occurred at a Midwest US state university with a growing, online, non-cohort master’s program in Career and Technical Education (CTE). Of the 26 students enrolled in a five-week online graduate course on Cooperative Education beginning in mid-May, 20 provided informed consent. Most subjects were Career and Technical (i.e., vocational) teachers. The course used Blackboard course management software and its asynchronous threaded discussion forums as the primary means of communication among students, although there were also lessons and readings on Web pages prepared by the instructor, and email was used as a supplementary and optional form of communication. All forums were one week in duration, entailing online lecture materials, external readings, and assignments to participate in online discussion on specific topics.

During Week 3, each student was asked to fill out a “reflection activity form” based on their own discussions in the Week 1 forum, providing counts of their (1) total number of messages, (2) social messages, and (3) off-topic messages. Students also counted messages of high, fair, and poor quality corresponding to four categories previously presented by the instructor in a rubric for the evaluation of discussion participation: posing a relevant question, offering unsolicited input, offering deep responses, or offering shallow responses. In Week 4, students engaged in a post-treatment discussion on a new topic. A questionnaire was administered in Week 5.


Unitizing and Coding Messages


The unit of analysis for this study was the sentence. Participants’ messages in the Blackboard forums for Weeks 1, 2, and 4 were unitized according to terminal punctuation or hard return, excluding introductory names and signatures. For bulleted items separated by hard returns, the stem was unitized with the first item on the list, and each remaining item was a separate unit. Each direct quotation was counted as a single unit.
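The basic unitizing rule (terminal punctuation or hard return) can be sketched in code. This is an illustrative sketch only; the study’s unitizing was performed by hand, and the special cases for bulleted lists, signatures, and quotations described above are not automated here. The function name `unitize` is hypothetical.

```python
import re

def unitize(message: str) -> list[str]:
    """Split a discussion message into units by hard return and
    terminal punctuation (a sketch of the study's basic rule;
    the original unitizing was done manually)."""
    units = []
    for line in message.splitlines():        # hard returns first
        line = line.strip()
        if not line:
            continue
        # Split on terminal punctuation (. ! ?), keeping the mark
        # with its sentence via a lookbehind assertion.
        parts = re.split(r'(?<=[.!?])\s+', line)
        units.extend(p for p in parts if p)
    return units
```

For example, `unitize("I agree. What about cost?")` yields two units, matching the sentence-level rule.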

Each unit was coded according to function (Cognitive, Metacognitive, Organizational, or Social). A coding hierarchy was used: units that were both cognitive and metacognitive were classified as cognitive, and so on, following the order listed above. Each cognitive unit was further coded as exhibiting one of five cognitive skills (Inference, Analysis or Elaboration, Judgment, Cognitive Strategy, or Elementary Clarification). Units that fit multiple skills were coded using the hierarchy indicated by the order listed above, with inference the highest. Each cognitive unit was then coded according to its level of complexity (Low or High). Direct quotations were coded as Cognitive, Elementary Clarification, and Low. Attachments were coded as Cognitive, Elementary Clarification, and High. This system was based on a content analysis structure (Henri 1992; Henri and Rigault 1996) previously used by researchers in this department (Rose 2002; Rose and Flowers 2003), but with the category of “In-Depth Clarification” used in the literature renamed “Analysis or Elaboration” to facilitate coders’ understanding of this area.
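The hierarchical coding rule amounts to: when a unit fits several categories, assign the highest-ranked one. A minimal sketch of that tie-breaking logic follows; the category orderings are from the study, but the helper function `resolve` and the idea of automating the step are hypothetical (the actual coding judgments were made by trained human coders).

```python
# Hierarchies ordered highest-ranked first, per the study's scheme.
FUNCTION_ORDER = ["Cognitive", "Metacognitive", "Organizational", "Social"]
SKILL_ORDER = ["Inference", "Analysis or Elaboration", "Judgment",
               "Cognitive Strategy", "Elementary Clarification"]

def resolve(candidates: set[str], hierarchy: list[str]) -> str:
    """Return the highest-ranked category among those a unit fits
    (a sketch of the tie-breaking rule, not the coding itself)."""
    for category in hierarchy:
        if category in candidates:
            return category
    raise ValueError("unit fits no category in the hierarchy")
```

Under this rule a unit judged both Cognitive and Metacognitive resolves to Cognitive, and one judged both Judgment and Inference resolves to Inference.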

After a training period, two coders performed initial coding of messages, intercoder reliability was recorded (Table 1), and a third coder resolved disagreements. In areas where all three coders disagreed, the item was discussed to reach consensus.


Table 1

Intercoder reliability.

n(unit)a   Function   n(cog)b   Skill    Level

3018       88.0%      2489      61.8%    91.9%

a Units coded by Function



b Cognitive units coded by Skill and Level



Intercoder reliability for function was at 88%, largely due to the high percentage of cognitive units. However, the coding of cognitive units by skill was more problematic, achieving a reliability of only 61.8%. Since the skill categories were rather abstract, there seemed to have been more opportunities for differences in interpretation. Level, with only two choices and less abstract than skill, had a high reliability of 91.9%.
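The reliability figures in Table 1 are simple percent agreement between the two initial coders. As a point of reference, that computation is sketched below (function name hypothetical); note that percent agreement, unlike Cohen’s kappa, is not corrected for chance agreement.

```python
def percent_agreement(codes_a: list[str], codes_b: list[str]) -> float:
    """Percentage of units on which two coders assigned the same
    code (simple agreement, as in Table 1; not chance-corrected)."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must rate the same units")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)
```

For example, two coders agreeing on three of four units would score 75.0%.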


Results


Quantitative results from all three pre-treatment student discussion forums (one in Week 1 and two in Week 2) were compared to the post-treatment forum in Week 4 for all 20 participants, retaining the paired nature of the data by participant. It was hypothesized that after conducting the specified self-categorization of one’s discussion contributions, there would be a tendency for future contributions to:

  1. be more numerous;

  2. have a higher percentage of cognitive units;

  3. show more complex cognitive processes, as evidenced by a higher percentage of inference and analysis among cognitive units; and

  4. have a higher percentage of deep level processing.

Contrary to the first hypothesis, the number of units (unitized by terminal punctuation, typically by sentence) per person per forum dropped during the study from 62.2, 35.7, and 48.3 in the pre-treatment forums (48.7 overall average) to 35.6 in the post-treatment forum.

For each of the twenty participants, the individual percentage of each function (i.e., cognitive, metacognitive, organizational, and social units) in each of the four forums was determined. Using a planned comparison, results were combined for the three pre-treatment forums and compared (on a per-forum basis) to the post-treatment forum by person (i.e., paired), using α = .05 for a two-tailed test.
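For a single-degree-of-freedom within-subjects contrast like those in Tables 2 through 4, F(1, n−1) equals the square of the paired t statistic on the per-person differences. A minimal sketch of that computation follows, using only the standard library; the data in the example are illustrative, not the study’s actual scores.

```python
from math import sqrt

def paired_contrast_F(pre: list[float], post: list[float]) -> float:
    """F(1, n-1) for a within-subjects contrast of post vs. pre,
    computed as the square of the paired t statistic (a sketch;
    illustrative of the analysis, not the original software)."""
    n = len(pre)
    diffs = [b - a for a, b in zip(pre, post)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean_d / sqrt(var_d / n)                             # paired t
    return t * t                                             # F = t**2
```

The resulting F would then be referred to an F distribution with (1, n−1) degrees of freedom to obtain the significance values reported in the tables.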


Table 2
Test of Within-Subjects Contrasts for Percent Function of Post Treatment Forum vs. Each of Three Pre-Treatment Forums (df = 1,19)

Function         Pre          Post         Change    F        Sig

Cognitive        90% (43.1)   83% (29.6)   -7%*      6.643    0.018
Metacognitive     7% (4.1)     5% (2.8)    -2%       2.333    0.143
Organizational    2% (1.8)     5% (2.3)    +3%*     11.25     0.003
Social            2% (2.1)     7% (3.3)    +5%*     14.346    0.001


The percent cognitive decreased significantly from 90% to 83% after treatment (43.1 cognitive units per person per forum, dropping to 29.6 after treatment), a change in the opposite direction from what had been hypothesized (Table 2). There was also a slight decrease in metacognitive units, but significant increases in organizational and social units. Note that each participant’s units in any forum summed to 100% across the four functions, so the function percentages were not independent.

Cognitive units were further classified according to skill (i.e., analysis or elaboration, elementary clarification, cognitive strategy, inference, or judgment). The two skills at the top of the hierarchy, inference and analysis or elaboration, showed significant decreases after the treatment, again in direct contradiction of the hypothesized effect (Table 3). Significant increases occurred in the percent judgment and the percent cognitive strategy.


Table 3

Test of Within-Subjects Contrasts for Percent Skill of Post Treatment Forum vs. Each of Three Pre-Treatment Forums (df = 1,19)

Cognitive Skill            Pre          Post          Change    F        Sig

Inference                  10% (4.1)     6% (2.3)     -4%*      4.41     0.049
Analysis or Elaboration    18% (6.7)     7% (2.7)    -11%*     84.678   <.001
Judgment                   16% (7.3)    29% (8.5)    +13%*     18.402   <.001
Cognitive Strategy          1% (1.4)     5% (2.1)     +4%*      6.038    0.024
Elementary Clarification   55% (25.0)   54% (16.0)    -1%       0.688    0.417


It had been hypothesized that the percentage of high level units would increase. Again, the results showed precisely the opposite, a significant decrease from 13% in the pre-treatment forums to 8% in the post-treatment forum, as noted in Table 4.


Table 4

Test of Within-Subjects Contrasts for Percent Level of Post Treatment Forum vs. Each of Three Pre-Treatment Forums (df = 1,19)

Level   Pre          Post         Change   F       Sig

High    13% (5.0)     8% (2.5)    -5%*     9.004   0.007
Low     87% (38.3)   92% (27.1)   +5%



After analyzing their postings, students responded to an open-ended item on the self-categorization instrument. Responses were reviewed for recurring themes, represented by the samples below. One value of this activity was raising awareness of the quality of one’s discussion participation. Multiple students felt their contributions were inadequate:

This led to statements of resolution from some participants.

There was also a call for more substantive analysis:

In contradiction to the quantitative findings, evidence suggested the activity was beneficial for some:

One item on a post-study questionnaire asked, “In what ways was the quality of your discussion different after you performed the Reflection Activity? That is, how were the comments you made in later forums different in quality from those before the Reflection Activity?” Some indicated that there was little change, others that the treatment seemed to increase quality or depth, and a third group indicated that the treatment might have diminished their contributions:

The perceived impacts of this activity on others’ contributions were also mentioned:


Discussion


The quantitative analysis showed a reduction in the volume of discussion, with 13.1 fewer units per person per forum after treatment. The decrease is likely due largely to the impact of self-categorization on one’s habits of participating in online discussion. The treatment may have been intimidating to many participants because of the increased awareness that others were evaluating the quality of their contributions: “… my horrific discovery of how poor my spelling is!” Other factors that may have contributed include:

Four indicators of discussion quality significantly decreased after treatment. The percentage of posts that were cognitive in function had been expected to increase due to greater awareness of how one’s discussion meets cognitive criteria. The unexpected decrease (-7%) is likely due largely to increased self-consciousness from the treatment, but may have been influenced more by the ending of the participants’ academic year than by the other factors noted.

Indicators of discussion quality among cognitive units included the percentages of the higher order skills of inference and analysis/elaboration. After discovering significant decreases in inference (-4%) and analysis/elaboration (-11%) where increases had been expected, the nature of the tasks during the different periods was called into question to determine whether one may have been structured to promote more or less inferencing or analysis/elaboration than the others. The topic of the fourth week (post-treatment) included the evaluation of content and instructional strategies, possibly promoting more statements of judgment. There was a significant increase in the percent of cognitive units coded as judgment (+13%); it is likely that participants were feeling less of a need to explain the reasoning behind a conclusion. Similarly, the percent of cognitive units coded as high level, a final primary measure of discussion quality, showed an unexpected and significant decrease (-5%); this was also likely due to the factors previously mentioned.

Even though the quantitative data did not show desirable outcomes, qualitative data did indicate a perceived advantage of this treatment for some. This discrepancy is problematic, and indicates that students may hold inaccurate conceptions of the quality of their own contributions.

Problems with coding methods were previously reported in the literature:

  1. “Utterances are often ambiguous in meaning, making coding difficult or arbitrary.

  2. Utterances may have - indeed often have - multiple simultaneous functions, which is not recognized by most coding schemes which normally involve the assignment of utterances to mutually exclusive categories.

  3. The phenomena of interest to the investigator may be spread over several utterances, and so any scheme based on single utterances as the unit of analysis may not capture such phenomena.

  4. Meanings change and are renegotiated during the course of the ongoing conversation.” (Draper and Anderson 1991, cited in Mercer and Wegerif 1999)

The present study illustrates some of the problems with existing coding structures. Many discussion points were difficult to place “cleanly” into single categories. Many could have qualified for multiple categories dependent on interpretation.


Conclusion


There is a need for new methods that would provide improved accuracy and reliability during categorization. Some information was lost during the unitization and coding of responses in the present study. More work is needed to develop a system to more effectively preserve the intent of messages while facilitating both ease and accuracy of analysis, possibly using larger unit size to preserve the author’s intent.

Online instructors who wish to improve the quantity and cognitive depth of online student discussions should not use a single activity where students self-categorize their own discussions thinking it will achieve the desired result. It may be necessary to provide students with additional tools, input, or time for practice that they would perceive as helpful in improving their participation in forums.


References


Bender, D. M., B. J. Wood, and J. D. Vredevoogd. 2004. Teaching time: Distance education versus classroom instruction. The American Journal of Distance Education 18 (2): 103-14.

Bonk, C. J., J. R. Kirkley, N. Hara, and N. Dennen. 2001. Finding the instructor in post-secondary online learning: Pedagogical, social, managerial, and technological locations. In Teaching and learning online: Pedagogies for new technologies, ed. J. Stephenson, 76-97. London: Kogan Page.

Duffy, T. M., B. Dueber, and C. L. Hawley. 1998. Critical thinking in a distributed environment: A pedagogical base for the design of conferencing systems. In Electronic collaborators: Learner-centered technologies for literacy, apprenticeship, and discourse, ed. C. J. Bonk and K. S. King, 51-78. Mahwah, NJ: Erlbaum.

Giguere, P. J., S. W. Formica, and W. M. Harding. 2004. Large-scale interaction strategies for Web-based professional development. The American Journal of Distance Education 18 (4): 207-23.

Hannafin, M., K. Oliver, J. R. Hill, E. Glazer, and P. Sharma. 2003. Cognitive and learning factors in Web-based distance learning environments. In Handbook of distance education, eds. M. G. Moore and W. G. Anderson, 245-60. Mahwah, NJ: Erlbaum.

Henri, F. 1992. Computer conferencing and content analysis. In Collaborative learning through computer conferencing: The Najaden papers, ed. A. R. Kaye, 117-36. New York: Springer-Verlag.

Henri, F., and C. R. Rigault. 1996. Collaborative distance learning and computer conferencing. In Advanced educational technology: Research issues and future potential. Vol. 128, Computer and Systems Sciences, ed. C. O’Malley, 146-61. New York: Springer-Verlag.

Holmberg, B. 2003. A theory of distance education based on empathy. In Handbook of distance education, eds. M. G. Moore and W. G. Anderson, 279-86. Mahwah, NJ: Erlbaum.

Jeong, A. C. 2003. The sequential analysis of group interaction and critical thinking in online threaded discussions. The American Journal of Distance Education 17 (1): 25-43.

Kanuka, H., D. Collett, and C. Caswell. 2002. University instructor perceptions of the use of asynchronous text-based discussion in distance courses. The American Journal of Distance Education 16 (3): 151-67.

Mercer, N. and R. Wegerif. 1999. Is ‘exploratory talk’ productive talk? In Learning with computers: Analyzing productive interaction, eds. K. Littleton and P. Light, 79-101. New York: Routledge.

Moore, M. G. 1989. Three types of interaction. The American Journal of Distance Education 3 (2): 1-6.

Moore, M. G. 1993. Theory of transactional distance. In Theoretical principles of distance education, ed. D. Keegan, 22-39. London: Routledge.

Moore, M. G., and G. Kearsley. 2005. Distance education: A systems view. 2nd ed. Belmont, CA: Wadsworth.

Roblyer, M. D. and W. R. Wiencke. 2003. Design and use of a rubric to assess and encourage interactive qualities in distance courses. The American Journal of Distance Education 17 (2): 77-98.

Rose, M. A. 2002. Cognitive dialogue, interaction patterns, and perceptions of graduate students in an online conferencing environment under collaborative and cooperative structures. Ed.D. diss., Indiana University, Bloomington.

Rose, M. A., and J. Flowers. 2003. Assigning learning roles to promote critical discussions during problem-based learning. Paper presented at the 19th Annual Conference on Distance Teaching and Learning, Madison, WI.

van Gelder, T. 2005. Teaching critical thinking. College Teaching 53 (1): 41-7.

