Kinasevych, O. (2009). Student involvement in instructional improvement: Considerations and methods for total quality management and quality circles. Unpublished manuscript, Winnipeg, Canada.
The key focus of this paper is the role and participation of stakeholders in post-secondary education in implementing a quality circle approach to teaching and learning feedback. To arrive at that discussion, the paper first considers several shortcomings of existing methods of evaluating teaching, particularly the anonymous, summative student surveys often called student evaluation of instruction (Merritt, 2008). It then considers some of the pedagogical and administrative imperatives for feedback and some of the historical connections that led to current feedback methods. Pedagogical considerations are driven by research in teaching practice and in methodologies of teaching evaluation. The paper illustrates some of the alternatives described in the literature, the degree of their success, and how they have been founded upon research. This sets the context for a discussion of total quality management (TQM) and of quality circles, their origins in industrial settings, and their subsequent adoption within some quarters of post-secondary education. Several examples of implementations in post-secondary education are described. A discussion then considers how stakeholders in post-secondary education would participate in implementing and benefiting from quality circles as a means of improving teaching and learning. Finally, several recommendations are given on the basis of these considerations.
Statement of the Problem
Current practice in gathering feedback on teaching behaviours is most widely carried out through anonymous, summative surveys administered to learners at the conclusion of courses of instruction. Though this process has been identified as an efficient means of gathering large collections of numeric data, many questions have been raised in the literature about its applicability to the improvement of teaching. The topic of student evaluation of instruction has been widely researched in education policy, yet no clear consensus has emerged despite the volume of data. Centra (2003) identified over 2,000 studies in the ERIC database on the subject of student evaluations of instruction, making it a more frequently researched subject than any other form of instructional evaluation. That so much data and analysis continue to invite scrutiny may suggest that the appropriate research questions have yet to be articulated (Merritt, 2008).
An area of contention in the literature concerns the validity of student evaluations of instruction and their congruence with the teaching behaviours identified as beneficial to successful learning. Validity describes the correspondence between the intent of a measure and the implementation of the measuring instrument; that is, validity ought to be “the degree to which a test measures what it is intended to measure” (Gay, Mills, & Airasian, 2009, p. 608). In student evaluations of instruction, learners are asked to reflect on the teaching practices of their instructors and to identify the degree to which those practices parallel a prescribed set of behaviours that have been identified as good teaching. In this respect, the surveys have been found valid in numerous studies (Kanuka et al., 2009). Yet critics have challenged the student survey approach in that it may value expediency – omitting many crucial variables (Clayson & Sheffet, 2006) – over methods that may hold more closely to the principles of adult education, consider a wider range of variables, and be better tied to learner success (Huemer, n.d.; Merritt, 2008).
For administrators in post-secondary education, efficiency and expediency are among the attractions of student evaluation of instruction (Crumbley & Fliedner, 2002; de los Santos & Finger, 1994). This may come at the price of compromise, as Arreola put it rhetorically: “fast, fair, cheap – pick any two” (2006, p. xvi). Post-secondary administrators need some accountability for the allocation of their institutions’ resources. Faculty have a need for effective application of their teaching skills. Learners expect to be provided with the environment, processes, and behaviours that are conducive to learning. For these needs and expectations to be met, modern management theory, in contrast to earlier industrial models, suggests that quantifiability may not offer the best reflection of the qualitative traits desired of a learning organization. Student evaluation of instruction, relying as it does on efficiency and on quantifiable results, may respond only partially to these needs and expectations.
Alternatives to student surveys as instruments of instructional evaluation have been identified in the literature and supported in research. These take several forms, so post-secondary institutions have choices available to them. Most alternatives appear less straightforward or less speedy to implement than student surveys. Many may be not only challenging to implement but also difficult to comprehend outside a strictly pedagogical scope. In several of these alternatives, such as peer review or mentoring, a certain amount of planning and training may be necessary to assure a consistent process. TQM and quality circles bring their own rigours to quality improvement processes. Quality circles may introduce some complexity and uncertainty, and this can be seen as an obstacle to adoption of the methods. These activities would require additional resources, and much responsibility for the processes would need to be allocated to faculty, moving some management power away from administrators. These challenges ought not to be obstacles when the results offered by these alternatives are likely to improve teaching practice and learning success.
Chickering and Gamson (1987) described research-based principles of good practice in post-secondary education. In these principles, they encouraged contact between students and faculty, supported student interaction, endorsed active learning, noted the importance of prompt feedback, reinforced “time on task” (1987, p. 3), promoted the communication of high expectations, and advocated respect for diverse learning styles and abilities. These principles are grounded in earlier research in areas such as motivation, individualized instruction, and social learning. Research in teaching practice is relevant because it has been acknowledged that teacher behaviour affects student learning (Solomon, Rosenberg, & Bezdek, 1964). It has also been documented that learners recognize these benefits (Merritt, 2008). Soliciting student feedback appears to address learner needs and can demonstrate concern for their progress (Schmidt, Parmer, & Bohn, 2005). The principles identified by Chickering and Gamson can be considered in all aspects of teaching and learning, including the process of student feedback and teaching improvement.
The administrative need for accountability of institutional processes has been understood and discussed in the literature (de los Santos & Finger, 1994). The process of student evaluation of instruction has been linked by some researchers to the industrial production model of the early twentieth century, about the time when such student surveys had their beginnings. That model, dubbed Taylorism after one of its key proponents, valued managerial oversight of all production processes and treated each employee as a “cog” who would carry out management instructions without question (Bonstingl, 1992, p. 67). In this model, the quality of products or services was assessed at the end of the production process, when the number of defects was counted. Student evaluation today, by and large, tends toward managerial control over processes and terminal assessments of quality (Jones & Timmerman, 1994).
Instructors can benefit from guidance that would improve their teaching practices and behaviours. As described above, a number of beneficial teaching practices have been identified and supported in the literature. Where instructors may need assistance is in identifying the practices in which they are proficient and those in which they require improvement. Means for identifying these gaps in practice include student evaluation of instruction, although alternatives exist and, arguably, may be more effective. Existing methods of student evaluation of instruction have been challenged on a number of fronts. The Doctor Fox study of Naftulin, Ware, and Donnelly (1973) and related studies by later researchers provided literally dramatic evidence of bias effects on student evaluations of instruction (Abrami, Leventhal, & Perry, 1982; Marsh & Ware, 1982; Ware & R. G. Williams, 1975; R. G. Williams & Ware, 1977). Merritt (2008) enumerates many of these. Ambady and Rosenthal (1993) point out how even brief impressions of others affect subsequent evaluations. The basic premise of these studies is that biases and stereotypes of all kinds enter into our social interactions and consequent assessments of others; student evaluations are no different. In this context, such biases and stereotypes can have damaging effects on instructor performance and learner benefits. Crumbley and Fliedner described the effect of “impression management,” whereby instructors actively modify their behaviours to affect student evaluations without improving teaching effectiveness (2002, p. 214). Williams and Ceci (1997) had demonstrated this effect earlier. These biases may be far from innocuous. Post-secondary institutions may inadvertently marginalize minority staff when anonymous, form-based feedback is instituted. Racism, sexism, ageism, and other prejudices have been identified as factors in student evaluations (Huemer, n.d.; Merritt, 2008; Smith, 1999).
Factors external to the instructor, such as the classroom space, textbooks, and curriculum, have also been seen to influence student evaluations (Abrami et al., 1982; Crumbley & Fliedner, 2002).
Because there are “perhaps hundreds” of unaccounted-for variables in student evaluation (Abrami et al., 1982, p. 457), instructors have been observed to modify aspects of their behaviour in an effort to improve otherwise poor student evaluations without actually improving teaching or learning success (W. M. Williams & Ceci, 1997). Evidence of this phenomenon, as well as of grade inflation and lowered academic rigour, has been documented in several instances (Crumbley & Fliedner, 2002; Merritt, 2008). Some studies indicate that such slackening of standards has little bearing on student evaluation results (Centra, 2003). Regardless of the effect on student evaluations, lowering academic expectations as a strategy to avert poor evaluations is likely to degrade learning outcomes.
Such student evaluation approaches that perpetuate an industrial revolution conception of the educational process have been called “a seriously flawed paradigm” (Clayson & Sheffet, 2006, p. 159). In addition to the criticisms of existing student evaluation processes already described, a lack of scientific and philosophical consensus has also been documented in the literature (Abrami et al., 1982; Ambady & Rosenthal, 1993; Crumbley & Fliedner, 2002; Merritt, 2008).
A number of alternatives for teaching evaluation and improvement have been offered in the literature. Among these are approaches that involve an instructor’s academic peers such as peer review and mentoring, instructor coaching, and faculty peer supervision (Crumbley & Fliedner, 2002). Observation and coaching by third-party specialists, such as psychologists, have been described (Wilson, 1986). Instructors may also provide teaching portfolios, teaching materials, and other documentary evidence of their teaching practice (Crumbley & Fliedner, 2002; Jones & Timmerman, 1994).
In addition to the alternatives described, post-secondary education can also consider the potential of TQM and quality circles for the purpose of student feedback and teaching improvement. Quality circles “allow teachers and students to become co-learners and to share the responsibility for teaching and learning through shared empowerment and ownership of the course” (Schmidt et al., 2005, p. 2). Deming (1986) described the origins of TQM and is acknowledged to have given this management theory much of its current form (Bonstingl, 1992; Jones & Timmerman, 1994; Merritt, 2008; Schmidt et al., 2005; Weaver, 1992). Deming itemized a fourteen-point approach to management for quality, including the admonishment to “cease dependence on inspection to achieve quality. Eliminate the need for inspection on a mass basis by building quality into the product in the first place” (1986, p. 23). TQM was eventually adopted into post-secondary education, first at a management level and later through student quality circles (Bonstingl, 1992; Levin, 1995; Useem, 1996). For the purpose of improving the application of TQM in post-secondary education, Jones and Timmerman (1994) developed a common vocabulary with ties to pedagogical research. TQM eventually expanded in recognition and use in Canadian community colleges (Knowles, 1994; Levin, 1995; Richardson & Wolverton, 1994).
Several examples of TQM and quality circles in education can be found in the literature. Knowles described how Red River College in 1991, prior to its transition to board governance, adopted and implemented TQM (1994). Useem described quality circles in undergraduate and graduate level courses (1996). Orts described the use of quality circles in law education (1997). Shariff described how quality circles were used in business administration studies and the degree of their success as a curriculum item (1999). Schmidt et al. described how quality circles were implemented in food science studies (2005). For the purposes of developing and implementing student quality circles, Nuhfer provided an extensive guide (2004).
Analysis of Implementations
Weaver (1992) considered how TQM might be implemented in education. In that discussion, students were viewed as both customers and employees. This view underscored the students’ participation and involvement in the learning process. Instructors were urged to perceive the learning process from the students’ perspective. Instructors were to participate more as part of the administrative team and were to gain additional power and responsibility as a result. The benefit of TQM in this new arrangement would be to provide continuous monitoring of the teaching and learning process.
Using similar considerations, actual implementations of quality circle processes for the purpose of student feedback and teaching improvement have been made in a number of post-secondary institutions. The following are descriptions and analyses of several of these implementations.
Useem (1996) used quality circle feedback in a number of undergraduate and graduate-level courses at the Wharton School of the University of Pennsylvania. The process involved describing quality circles to the students at the beginning of each school term. The instructor would ask for several volunteers to participate in quality circle meetings, and the names of the volunteers were announced and made available to the other students. Quality circle members were asked to solicit ongoing feedback from the students. Meetings were brief but scheduled at frequent intervals. At the meetings the instructor would provide a copy of the syllabus and would review the meeting procedure to ensure that key learning and classroom issues would be addressed. The quality circle members would report back to the students after each meeting. Informal discussions would take place between meetings. At the conclusion of the school term there would be a concluding meeting to record insights for future quality circle participants.
Orts (1997) described quality circles used at the University of Michigan Law School. At the start of each school term, the use of quality circles was described and quality circle members either volunteered or were elected. Students were encouraged to give input to the quality circle members. Meetings were held every three weeks and were shorter than an hour in duration. It was indicated that quality circle participation was not counted toward grades or credits. An effort was made to have diverse representation on the quality circle teams with special attention to students who might otherwise be marginalized. Meetings would focus on classroom dynamics and the teaching environment and processes. The quality circle members would report back to the students after each meeting.
Shariff (1999) provided a description of quality circles that were implemented as a project in an undergraduate business program curriculum. It was not clear how the quality circles in this example aided teaching but it was clear that students learned about the process. Volunteers were solicited and a facilitator, not the instructor, would guide the students through the quality circle process.
Schmidt et al. (2005) described the implementation of quality circles in a large introductory undergraduate food science course. The researchers cited Nuhfer (2004) as a reference for their design. Each quality circle team conferred only for the duration of a single school term. The concept and purpose of quality circles were described to the entire class at the beginning of the school term. Volunteer members were informed that no credit or grade was given for participation in the quality circle; the only compensation was lunch provided during the meetings. A diverse group of members was selected, and those members were then assigned to represent specific, random sets of students. A number of alternates were also selected in the event that regular quality circle members were unable to attend. Meetings were held to discuss a variety of course issues. An assessment of the quality circle process, made at the conclusion of the school term, was archived and communicated to subsequent quality circle teams.
Merritt provided a description of a process similar to quality circles called “small group instructional diagnosis” (2008, p. 281). This process made use of a facilitator who met first with the instructor to learn about course goals, course materials, and teaching strategies that the instructor had planned. Along with the facilitator, students would then participate as a group to “discuss their perspectives …, expanding the information available to each student, checking individual biases, establishing accountability, and implicitly noting the seriousness of the process and need for accuracy” (Merritt, 2008, pp. 281-282).
Common to all the above examples was their implementation in post-secondary education. Orts, Shariff, Schmidt et al., and Useem all indicated that quality circle processes were communicated to students and that volunteers were solicited at the beginning of the school term. Orts and Schmidt et al. sought to represent the diversity of their students in the quality circles and expressly indicated that no credit or grade was given for participation. Orts and Useem announced quality circle members to the other students, held short and frequent meetings, and provided reports to the other students after the meetings. Merritt, Orts, Schmidt et al., and Useem described the topics to be covered in the quality circle meetings. Schmidt et al. and Useem both described concluding meetings at the end of term to record quality circle experiences for future use (Merritt, 2008; Orts, 1997; Schmidt et al., 2005; Shariff, 1999; Useem, 1996).
Some differences between the implementations are noted. In the case of Schmidt et al., the researchers indicated that a meal was provided as compensation at quality circle meetings. Merritt and Shariff indicated the use of a facilitator instead of the instructor. Orts indicated the possible election of quality circle participants. Schmidt et al. mentioned the use of alternate, ad hoc quality circle members in the event of absent regular members. Orts and Useem differed in their descriptions of the direction of student input: Orts described how students were urged to give input to quality circle members while Useem encouraged quality circle members to actively engage the students (Merritt, 2008; Orts, 1997; Schmidt et al., 2005; Useem, 1996).
The practices described in the above implementations can be seen to connect to the principles described by Chickering and Gamson (1987). Quality circle meetings provide a platform for dialogue between instructor and students, thereby encouraging such contact. Dialogue among students about issues that affect their learning provides a way for them to develop cooperation. By engaging with the instructor about the strategies and techniques involved in the teaching, students are actively involved in their own learning. Frequent meetings support the idea of prompt feedback. Considering the teaching and learning techniques allows students to develop better strategies for their own study time, and “when students are engaged as active participants in the educational process, they better understand the process of teaching and thereby their own learning” (Kinasevych, 2009, pp. 4-5). The quality circle process allows instructors to maintain and emphasize high standards. By implementing quality circles, instructors have a way of addressing marginalized learners and diversity in the classroom.
Stakeholders whose involvement is addressed in this paper are learners, faculty, administration, and support staff. Learners are the direct beneficiaries of any improvements that may arise from quality circle processes aimed at instructional activities. Faculty are the key implementers of any improvements identified in quality circle meetings. Faculty also provide significant expertise in the areas of teaching and learning and in the practice of curriculum delivery. Administration allocates resources and manages institution-wide processes and standards. Support staff play a role in supporting the faculty and learners in processes outside the classroom.
As noted earlier, Chickering and Gamson identified active learning among their principles (1987). Students who are involved in their own learning may benefit in terms of their learning success. Quality circles can provide a means through which learners can become involved, either as meeting participants or simply through engagement with quality circle representatives. The implementation of quality circles can provide evidence of the commitment learners expect on the part of their post-secondary institution.
Faculty may benefit from quality circles by having their efforts better directed to immediate learner needs. In the case of terminal student evaluations, corrective action may not occur until the survey participants have completed the evaluations. With quality circles, instructors can respond immediately to feedback while a course is still active. Quality circles are a tool that can address the principles of Chickering and Gamson (1987). Quality circles can address the multiple behavioural variables that can affect learning outcomes and provide instructors a way to present their classroom strategies and address the relevance of external factors.
Administration may appear to lose some power in the implementation of quality circles. The gathering of student feedback would become less formal, and the process would be more in the control of faculty. At the same time, the potential for improved learning success could surpass that of terminal student evaluations, and effective instructors would not be subject to the bias that has been evidenced in anonymous student evaluations. Administration would need to carefully weigh the potential benefit against the complexity that quality circles can introduce. Were faculty to introduce quality circles alongside terminal student evaluations of instruction, they may be more inclined to incorporate improvements indicated by the quality circles than by the student evaluations, simply by virtue of the time invested in the process.
Support staff could become more involved in academic processes with the use of quality circles. Quality circles can more readily identify influences on learning that may occur outside the classroom but still within an institution’s control. Ongoing dialogue with students about their learning processes may well engage non-faculty staff in addressing external factors both in and outside the classroom.
The current mechanisms for learner feedback suit the needs of administration but may not be best suited for the needs of learners, nor of faculty. These mechanisms may do little to involve other stakeholders in the process, including support staff. Alternatives to current student evaluation methods are available and there is evidence that they are effective and possibly better suited to the requirements of all stakeholders. Among these alternatives are quality circles. Quality circles can be seen to reinforce research-supported principles of effective instruction. The process for implementing quality circles does not appear to be overly complex in the examples discussed in this paper. The positive impact of quality circles on learner success ought to be considered given the apparent inadequacy of current methods for student feedback.
References
Abrami, P. C., Leventhal, L., & Perry, R. P. (1982). Educational seduction. Review of Educational Research, 52(3), 446-464. doi: 10.3102/00346543052003446
Ambady, N., & Rosenthal, R. (1993). Half a minute: Predicting teacher evaluations from thin slices of nonverbal behavior and physical attractiveness. Journal of Personality and Social Psychology, 64(3), 431-441. doi: 10.1037/0022-3514.64.3.431
Arreola, R. A. (2006). Developing a comprehensive faculty evaluation system (3rd ed.). Bolton, Massachusetts: Anker Publishing Company.
Bonstingl, J. (1992). The total quality classroom. Educational Leadership, 49(6), 66-70.
Centra, J. A. (2003). Will teachers receive higher student evaluations by giving higher grades and less course work? Research in Higher Education, 44(5), 495-518.
Chickering, A. W., & Gamson, Z. F. (1987). Seven principles for good practice in undergraduate education. AAHE Bulletin, 3-7.
Clayson, D. E., & Sheffet, M. J. (2006). Personality and the student evaluation of teaching. Journal of Marketing Education, 28(2), 149-160.
Crumbley, D. L., & Fliedner, E. (2002). Accounting administrators’ perceptions of student evaluation of teaching (SET) information. Quality Assurance in Education, 10, 213-222. doi: 10.1108/09684880210446884
Deming, W. E. (1986). Out of the crisis. Cambridge, Massachusetts: MIT Press.
Gay, L. R., Mills, G. E., & Airasian, P. W. (2009). Educational research: Competencies for analysis and applications (9th ed.). Upper Saddle River, New Jersey: Prentice Hall.
Huemer, M. (n.d.). Student evaluations: A critical review. Retrieved November 9, 2009, from http://home.sprynet.com/~owl1/sef.htm
Jones, J., & Timmerman, L. (1994). Total quality management assessment, teaching, and learning: Toward a campuswide language and system. Community College Journal of Research and Practice, 18(4), 411-418. doi: 10.1080/1066892940180409
Kanuka, H., Marentette, P., Braga, J., Campbell, K., Harvey, S., Holte, R., et al. (2009). Evaluation of teaching at the U of A (University of Alberta). University of Alberta.
Kinasevych, O. (2009, November 21). Student evaluation of instruction: A reflection on a contemporary issue in Canadian post-secondary education. Unpublished manuscript, Winnipeg, Canada.
Knowles, T. (1994). Total quality management (TQM) in a community college. Retrieved from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED370616
Levin, J. (1995). The challenge of leadership. In J. D. Dennison (Ed.), Challenge and opportunity: Canada’s community colleges at the crossroads (pp. 105-120). Vancouver: UBC Press.
Marsh, H. W., & Ware, J. E. (1982). Effects of expressiveness, content coverage, and incentive on multidimensional student rating scales: New interpretations of the Dr. Fox effect. Journal of Educational Psychology, 74(1), 126-134.
Merritt, D. J. (2008). Bias, the brain, and student evaluations of teaching. St. John’s Law Review, 82(1), 235-287.
Naftulin, D. H., Ware, J. E., & Donnelly, F. A. (1973). The Doctor Fox lecture: A paradigm of educational seduction. Journal of Medical Education, 48(7), 630-635.
Nuhfer, E. B. (2004). A handbook for student management teams. Idaho State University. Retrieved from http://www.isu.edu/ctl/facultydev/webhandbook/smt.htm
Orts, E. W. (1997). Quality circles in law teaching. Journal of Legal Education, 47(3), 425-431.
Richardson, R. C., & Wolverton, M. (1994). Leadership strategies. In Managing Community Colleges (pp. 40-59). San Francisco: Jossey-Bass.
de los Santos, A. G. J., & Finger, S. (1994). Managing educational operations. In Managing Community Colleges (pp. 422-438). San Francisco: Jossey-Bass.
Schmidt, S., Parmer, M., & Bohn, D. (2005). Using quality circles to enhance student involvement and course quality in a large undergraduate food science and human nutrition course. Journal of Food Science Education, 4(1), 2-9. doi: 10.1111/j.1541-4329.2005.tb00049.x
Shariff, S. H. (1999). Students’ quality control circle: A case study on students’ participation in the quality control circle at the Faculty of Business and Management. Assessment & Evaluation in Higher Education, 24(2), 141-146.
Smith, P. J. (1999). Teaching the retrenchment generation: When Sapphire meets Socrates at the intersection of race, gender, and authority. William and Mary Journal of Women and the Law, 6(1), 53-214.
Solomon, D., Rosenberg, L., & Bezdek, W. E. (1964). Teacher behavior and student learning. Journal of Educational Psychology, 55(1), 23-30. doi: 10.1037/h0040516
Useem, M. (1996). Talk about teaching: Using quality circles to master the classroom. Almanac, 43(5). Retrieved from http://www.upenn.edu/almanac/v43/n05/useem.html
Ware, J. E., & Williams, R. G. (1975). The Dr. Fox effect: A study of lecturer effectiveness and ratings of instruction. Journal of Medical Education, 50(2), 149-156.
Weaver, T. (1992). Total quality management. ERIC Digest, Number 73. Eugene, Oregon: ERIC Clearinghouse on Educational Management. Retrieved from http://www.eric.ed.gov/ERICWebPortal/contentdelivery/servlet/ERICServlet?accno=ED347670
Williams, R. G., & Ware, J. E. (1977). An extended visit with Dr. Fox: Validity of student satisfaction with instruction ratings after repeated exposures to a lecturer. American Educational Research Journal, 14(4), 449-457. doi: 10.3102/00028312014004449
Williams, W. M., & Ceci, S. J. (1997). “How’m I doing?” Problems with student ratings of instructors and courses. Change, 29(5), 12-23.
Wilson, R. C. (1986). Improving faculty teaching: Effective use of student evaluations and consultants. The Journal of Higher Education, 57(2), 196-211. doi: 10.2307/1981481