Week 10 Journal

I like the flipped structure of learning this week. We are able to approach the materials and take them in at our own pace. Discussions were held on Google+ and readings for in-class activities were available on Google Drive. By taking the lecture portion home with us, we have time left in the classroom to do exercises, ask questions and seek help. The biggest advantage of a flipped class is that students do the actual exercises in a place where a teacher is available, whether face-to-face or online, to give them instant feedback. That is the benefit of synchronous communication.

During the class break, one thing Pim and I talked to Tom about was the issue of authority. Sometimes, as an instructional designer, you want to bring new technology into the department, or you are excited about a new teaching method and believe it will improve learning outcomes if implemented appropriately, or you feel it is the right time to start an innovation program to close a discrepancy. The problem, however, is that you don't have adequate authority to realize your plan. Tom said that an instructional designer gives recommendations and negotiates with other stakeholders; it is not a role that controls the whole process. If they still don't buy in even after you have demonstrated the advantages of the new approach, then let it go, because it is not your fault. Obviously, their negative reaction will upset you. It is a situation “when instructional designers find themselves in conflict with institutional values, and powerless to effect necessary change”. Frustration thus arises from a lack of coherence between institutional and personal values.

The question of whether someone who wants to be an instructional designer in higher education needs teaching experience reminds me of a conversation with a PhD student in IDDE. She told me that some faculty members are confident about their content, course design and delivery, and some refuse to change even when you show them there is room for improvement. But if you have teaching experience, especially in the same or a similar area, they are more likely to adopt your suggestions. I feel that even in the United States, where ID originated and has been evolving for more than 70 years, faculty members might have heard of it, but not all of them understand what an instructional designer can contribute to learning and teaching beyond technological support.

 


Module 9

We learned about model development from the guest lecture this week, which is really helpful for our own model design work. A model enables you to explain what goes on in a situation and what the components of the system are. An instructional system model is not static: as a visual way to represent a dynamic process, it predicts what might happen as the variables that influence one another are revealed. Assumptions are made during model development, and three aspects should be taken into account: assumptions about the context, about the role of instructional design, and about the role of the learner. You need to consider what would happen if the assumptions changed, because in the real world things might be totally different. Romi's story is striking: they assumed those workers could read and decided to teach them chemistry as the solution to close the gap, while in fact the workers lacked adequate literacy skills.

The discussion of systems approaches versus non-systems approaches in instructional design is interesting. It seems that at first people treated them as a dichotomy, an either/or problem with a clear line drawn between the two ends. When the ideas of “hard systems” and “soft systems” were proposed, it turned out to be better to regard them as points located along a continuum; there is no way to quantify them precisely. Hard systems are those with everything well defined (e.g., components, relationships between components, means). Soft systems rely more on intuition and experience: unlike a hard system, where everything is predetermined, when using a soft model to solve problems you need to make decisions yourself according to the specific context, so decision-making is involved. Non-systems approaches are related to art and reflection, and they sit at the other end of the continuum.

Module 8

Web 2.0 technologies have changed our lives considerably. People's role shifts from user to participant: you can do more than merely visit websites, and everyone contributes to online content via blogging, wikis, uploading videos, etc. Two-way information flow on the Internet today is realized through instant messaging tools. We can see the same shift in role in online courses, where students are no longer passive receivers of knowledge. Interpersonal interaction in online education is one key factor to consider when developing web-based instruction. Online learning differs from traditional classroom instruction in several ways. According to transactional distance theory, psychological misunderstanding and communication gaps result from physical separation. Also, the diversity of learners' backgrounds in online courses is often greater than in classroom teaching, so the analysis of learner characteristics becomes harder because people are so different. I like the idea we discussed yesterday that, in this situation, giving learners options to choose the type or format they prefer is better than trying to find one balance that meets everyone's demands.

I have to say yesterday's class reminds me of IDE 611, which is about technology used in educational settings. One thing we kept talking about was "integrating technology into education effectively". But that is easier said than done; problems will arise without comprehensive consideration.

As for models for developing web-based products, one thing that pops into my mind is the prototyping model of software development. One example in that model-survey book is Boehm's spiral model. Given that web-supported instruction is developed for learning that occurs within an online environment, I personally believe a model for developing web-based courses can borrow something from those prototyping models.
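To make that borrowing concrete, here is a minimal sketch of the iterate-evaluate-revise cycle that spiral-style prototyping models share. Everything in it (the feature list, the satisfaction scoring, the names) is hypothetical scaffolding to show the loop structure, not an actual development tool.

```python
# A toy spiral-style loop: build a richer prototype each cycle, evaluate it
# with a (simulated) pilot group, and stop once feedback is acceptable.
# The features and the scoring rule are made up for illustration.
from dataclasses import dataclass, field

@dataclass
class Prototype:
    version: int
    features: list = field(default_factory=list)

def pilot_feedback(proto: Prototype) -> float:
    """Stand-in for pilot-group evaluation; returns a satisfaction score in [0, 1]."""
    return min(1.0, 0.3 + 0.2 * len(proto.features))

def develop_module(planned_features, threshold=0.8) -> Prototype:
    proto = Prototype(version=0)
    for cycle, feature in enumerate(planned_features, start=1):
        proto = Prototype(version=cycle, features=proto.features + [feature])
        score = pilot_feedback(proto)        # evaluate the current prototype
        print(f"cycle {cycle}: added {feature!r}, satisfaction = {score:.2f}")
        if score >= threshold:               # remaining risk judged acceptable
            break
    return proto

develop_module(["video lectures", "discussion forum", "quizzes", "peer review"])
```

The point of the sketch is simply that evaluation happens inside every cycle rather than once at the end, which is exactly what a web-based course development model could borrow.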

 

Model Update Report

[Figure 1: curriculum-level model showing the Input, Process, Test & Feedback and Output stages]

Input: Anything used in the Process phase, including information, resources (human and material) and ideas. In this context, it refers to the predetermined National College English Syllabus, some existing learning materials (textbooks, video and audio, etc.) and data on students' learning outcomes (e.g., grades in the CET and course final exams).

Process: Actions taken in this stage to work with the information gained in the Input phase. It involves a series of activities and is essentially a process of developing the curriculum based on requirements and analysis results. The overlapping parts indicate that courses are interconnected: one course might be the prerequisite of another, or their contents may be continuous. Therefore, instructional designers should pay attention to individual courses as well as to the relations between courses (e.g., sequence, content); a small sketch of this sequencing idea follows the Output description below. More details of each oval are specified in figure 2. Before jumping into level 2 (course-level) development, some issues need to be considered: the features of the overall learning environment, how to divide the courses, and in what sequence to arrange them.

Test & Feedback: The test stage provides an opportunity to examine the validity and credibility of all the courses developed and the effectiveness of the curriculum as a whole. A focus group or pilot group can be used; one approach is to gather data on the focus group's performance in the CET to see where there is room for improvement. The feedback then informs revision of the project. It is an ongoing process.

Output: The output in this model refers to the final products of the project. In this curriculum-oriented context, it covers several things: modification or creation of textbooks and related learning materials, and individual courses that are fully developed in every aspect (e.g., strategies, delivery, evaluation instruments, and how each course relates to the others).
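Since the Process stage stresses prerequisite relations between courses, the sequencing question can be treated mechanically as a topological sort over a prerequisite graph. A minimal sketch follows; the course names are hypothetical placeholders, not part of the actual curriculum.

```python
# Topologically sort hypothetical courses so every prerequisite precedes
# the course that depends on it (graphlib is in the Python 3.9+ stdlib).
from graphlib import TopologicalSorter

# Map each course to the courses it requires first.
prerequisites = {
    "Intermediate Listening": ["Basic Listening"],
    "Academic Writing": ["Basic Writing"],
    "Translation Workshop": ["Academic Writing", "Intermediate Listening"],
}

print(list(TopologicalSorter(prerequisites).static_order()))
# One valid order:
# ['Basic Listening', 'Basic Writing', 'Intermediate Listening',
#  'Academic Writing', 'Translation Workshop']
```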

 

[Figure 2: course-level development process, based on the Kemp, Morrison & Ross model]

Figure 2, which derives from the Kemp, Morrison & Ross model, represents the process of developing an individual course within the curriculum.

Analysis:

Instructional problems: based on data gathered in the Input stage, identify instructional problems and specify goals for developing the course.

Learner characteristics: examine the student characteristics that should receive attention during planning; it should be noted that arts students and science & engineering students usually hold different attitudes toward, and expectations of, English learning.

Task & objective analysis: identify the subject content, and analyze the task components related to the stated goals and objectives (alignment).

Design:

Content sequence: sequence the content within each instructional unit so that learning proceeds logically.

Instructional strategies: design appropriate instructional strategies to help learners master the learning objectives.

Message design & instructional delivery: plan the instructional message and the ways to deliver it.

Development:

Development of instruction and materials: develop the overall instruction; create materials to support the instruction or activities, or select existing materials.

Evaluation:

Evaluation instruments: develop evaluation tools to assess learning outcomes.

Revision and Formative Evaluation: a continuous and ongoing process; a variety of techniques & strategies can be used to conduct the evaluation such as pilot groups.

Planning, Summative Evaluation, Support Services and Project Management: an emphasis on how to manage the instructional design process.

 

Week 10 Journal

Before doing this week's reading, I could only see the similarities among diverse ISD models, and I felt that many of them merely seemed to be another version of the ADDIE model because they all contained Analysis, Design, Development and Evaluation sections. As for the differences, I knew the models varied along some dimensions, such as context and format, but it was hard to clarify them and establish schemata to categorize them. Now I have learned that you can analyze a particular model in terms of orientation, knowledge structure, expertise, structure, context and level. That helps you learn more about the model itself and the situations it fits. Models designed for educational purposes tend more toward teaching “declarative knowledge”, while models for training purposes are more suitable for “procedural knowledge”. Also, compared to hard systems, in which everything is well defined, soft systems are more complex and might not be a good option for novice instructional designers. Sometimes your decision is based on your previous experience, which I would rather call “implicit knowledge”. I don't believe there are any “unteachable things”; the only problem is externalization, and of course that is hard.

Another issue mentioned in the guest lecture was the transfer paradox. He said the reason some PhD students chose to stay in academia was that they had problems transferring what they learned at school to real-world problems at work. Honestly, I doubted it: I tried to search for it online and could not find any supporting materials. However, I accidentally found an article about several transfer paradoxes learners confront and some suggestions for solving them.

The paradoxes discussed in that article are:

Paradox of finding prior knowledge;

Paradox of tacit knowledge;

Paradox of using relevant prior knowledge;

Paradox of recognizing relevant situations and conditions;

Paradox of near and far transfer;

The paradoxical “what” to transfer.

I think that although the list might not be comprehensive, it provides a way to think about possible reasons when learning transfer fails to occur.

Reference

Simons, P. R. J. (1999). Transfer of learning: Paradoxes for learners. International Journal of Educational Research, 31(7), 577-589.

Module 6 Evaluation

This week we talked about evaluation. There are usually two types, formative evaluation and summative evaluation, and we learned about them and their purposes in IDE 631. While in most cases a summative evaluation is conducted to see whether learners achieve the predetermined goals and objectives, people tend to neglect formative evaluation because of time or budget constraints. Sometimes, if the training or instruction is a one-time thing that is so well defined you can hardly generalize it, we do not necessarily need to conduct a formative evaluation. I still remember that in IDE 631, the project I worked on was to design a 2-hour workshop for 24 students who were required to do image-editing work in a given task. In that context, everything was clearly defined, from student characteristics to learning content. Part of my evaluation plan covered the methods and instruments for conducting a formative evaluation, and I did it without thinking much. After I presented my work, Tiffany asked me one question: “If you won’t do it twice, why do you design the formative evaluation?” Then I realized we can't treat the ADDIE model, or any other theory, as the Bible; instead we should modify them to fit the current situation.

We also discussed the importance of involving both internal and external evaluators during development and implementation. Through this mixed approach, we not only increase the objectivity and independence of the evaluation but also ensure that experts who know the context well are included.

Kirkpatrick's four levels of evaluation form a useful model for examining what degree an evaluation reaches. For Level 1, Reaction, interviews, (usually attitude) questionnaires and observation can be used to gauge people's reactions to a specific item. Exams can be conducted at Level 2, Learning. Evaluation at the third level, Behavior (performance), which concerns learning transfer, faces key issues such as when learning transfer occurs and how much can be transferred. It is not uncommon to see learners do well at Level 2 but fail to perform well in their work; the reasons vary from inadequate time to a lack of motivation. The highest level, Results, is almost beyond consideration because of its complexity: it concerns the long-term impact on the whole organization.
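To restate that paragraph compactly, here is the level-to-instrument pairing as a simple lookup. The instruments for Levels 1 and 2 come from the discussion above; the entries for Levels 3 and 4 are my own illustrative guesses and are marked as such.

```python
# Kirkpatrick's four levels with example instruments. The Level 1-2
# instruments follow the discussion; Level 3-4 entries are guesses.
KIRKPATRICK = {
    1: ("Reaction", ["attitude questionnaire", "interview", "observation"]),
    2: ("Learning", ["exam"]),
    3: ("Behavior", ["on-the-job observation (hypothetical)"]),
    4: ("Results", ["long-term organizational metrics (hypothetical)"]),
}

for level, (name, instruments) in KIRKPATRICK.items():
    print(f"Level {level} ({name}): {', '.join(instruments)}")
```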

Module 5 Journal

We talked about learning architecture this week. It is a “design framework” that helps to organize instruction: decisions about which teaching strategies to employ and how to deliver them are made here. There are many interdependent variables you need to take into account, for instance the overall goal of the instruction or training, practical economic constraints, and the culture and philosophy of the wider system. The strategies and techniques you use should conform to the first (learning and teaching) principles mentioned in class.

Both group size and group structure play a part in collaborative/cooperative learning. One thing I noticed in Romi's book is that he says “heterogeneous groups are the best” when talking about ways to group students. Is heterogeneous grouping always the best choice at the school level, compared to homogeneous grouping and random grouping? From articles online, I found that groups composed of a mixture of students at different academic levels seem to be preferred by most people (although random grouping is the one I have experienced most so far). Homogeneous grouping is blamed for lacking varied social interactions because everyone is at the same level or working at the same speed. Also, the similarities may lead to boredom and thus fail to motivate students to engage in class activities. These are the most widely discussed shortcomings. However, no one can deny that people feel comfortable when grouped this way: no one feels they are above or below the academic challenge.

The reasons for thinking highly of heterogeneous grouping lie partly in the trend of peer learning, in which high-achieving students can tutor and remediate less capable students. Groups of mixed-ability students can help improve sharing skills and peer relations; the key point is to make sure the higher-performing students really learn something, the pay-off. One big concern about mixed-ability grouping is that it may slow down the learning of the higher-achieving students. Some research reveals that “heterogeneous grouping … [has] no negative effect on high performing students (according to their performance in standardized tests) and … immensely positive effects on lower and middle level students.” Another issue is work distribution: it is not rare to see the stronger, more dominant students do the work while others contribute nothing in a mixed-ability group. Perhaps the problem is not the way we group students, but how to motivate less capable students to get involved in learning. Homogeneous grouping, however, could provide those less academically outstanding students with chances to explore their potential.

Reference

http://www.edutopia.org/blog/student-grouping-homogeneous-heterogeneous-ben-johnson

http://uege5102-09m.blogspot.com/2009/07/heterogeneous-and-homogeneous-grouping.html

Module 4 - Objectives

This week we talked about objectives and why we need them in instructional design. Learning objectives ensure that both students and instructors are clear about the desired learning outcomes. They also help in designing evaluation instruments, because evaluation items should align with the measurable objectives. Besides, it has been shown that students who know the exact terminal behaviors expected of them are more likely to learn better than those who don't.

We discussed the many advantages of using learning objectives, which I thought were widely recognized in education. In nearly all my undergraduate courses, syllabi with specified objectives were distributed to students, and the instructor often explained them further in the first class. So I was a little shocked when Prof. Pusch said that teachers and experts in fields other than education don't actually attach importance to writing good objectives. Then I asked my roommate, who studies finance here, about her attitude toward objectives and the impact they had on her study. Her answer was, “I have never read them, though I know they are listed on the syllabus.” According to her, some professors in Whitman never even talk about learning objectives. “Sometimes you just feel confused because you don’t know what you are learning and which parts are important,” she said. Another example comes from my friends in the LC Smith School. I have heard their complaints about tests ever since the first quiz they took. “No relevance!” they cried. They told me it was common to find nothing in an exam (or occasionally only a small portion) related to the knowledge or topic that had dominated a two-hour class. Sadly and unsurprisingly, they didn't get good scores. I don't mean to encourage test-oriented learning here, but students should be able to differentiate the key points from trivial things after class. Perhaps their professor regards this sort of quiz and exam as an “external stimulus”, but it has too many negative effects on students' confidence as well as their metacognitive knowledge of learning (they might ask themselves, “Did I really acquire the knowledge?”).

As for Bloom's taxonomy of learning objectives, I have to say that I love that list. It is a useful tool for instructional design: people can simply pick an appropriate verb that makes the terminal behavior measurable. In a blog post I read recently, however, it was critiqued as follows.

“One person planned a simple game to reinforce Bloom’s taxonomy. The group was divided into two teams, and one person at a time from each team came up to the front and faced each other across a table. The “game show host” read a “Bloom verb” off an index card and the contestants slapped the table to see who could classify it first.

What would you guess happened? Think about a verb like “Determine”: where would you classify it?

The game almost immediately devolved into arguments over where the verbs belong. The poor activity leader had consulted a single list and didn’t even consider that different lists categorize verbs differently. Sometimes a single list classifies verbs in different places. This Bloom verb list, for example, classifies “identify” as both Knowledge and Comprehension; another list puts “compare” and “contrast” both in Analysis and Evaluation, depending on whether you use them together or separately.”

I don't think people's different interpretations of a word would have much negative impact on learning outcomes. Let's say the instructor uses “compare” in an objective statement intending the “Analysis” level, while students think they should achieve the “Evaluation” level. They would probably study harder, and the final performance might even exceed expectations. Bloom's taxonomy is illustrated as a pyramid: the behaviors listed in the lower layers are usually part of the more advanced behaviors in the upper layers. “Compare” and “contrast” are two things we do in evaluation, and if students can reach a higher-level learning objective, they have certainly achieved the lower level first.
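The classification trouble in the quoted game is easy to mimic in a few lines: represent each verb as a set of possible levels and flag any verb whose set has more than one element. The "identify", "compare" and "contrast" entries follow the quoted post; everything else is scaffolding.

```python
# Mimic the verb-classification game: a verb mapped to more than one Bloom
# level is ambiguous. 'identify' and 'compare'/'contrast' follow the quoted
# post; any further entries would be guesses.
verb_levels = {
    "identify": {"Knowledge", "Comprehension"},
    "compare": {"Analysis", "Evaluation"},
    "contrast": {"Analysis", "Evaluation"},
}

def classify(verb: str) -> str:
    levels = verb_levels.get(verb)
    if levels is None:
        return f"{verb!r}: not on this list"
    if len(levels) > 1:
        return f"{verb!r} is ambiguous: {sorted(levels)}"
    return f"{verb!r}: {next(iter(levels))}"

print(classify("compare"))    # 'compare' is ambiguous: ['Analysis', 'Evaluation']
print(classify("determine"))  # 'determine': not on this list (as in the game)
```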

Reference

http://christytucker.wordpress.com/2011/08/02/questioning-gagne-and-blooms-relevance/

Initial Plan for Model Design

Background Information

The context is Chinese college students' English learning. The College English Test, or CET, is a national English-as-a-foreign-language test in China. The aim of the test is to examine whether the English proficiency of undergraduates "reach[es] the required English levels specified in the National College English Syllabus" (two versions: one for science and engineering students, one for students of arts). There are two levels of the test: level 4 and level 6. Many schools even tie the level 4 certificate to the graduation diploma: if students don't pass CET-4, they will only get a degree without a diploma. Students who hold a CET-6 certificate are usually considered competitive in the job market and more likely to be recruited. While the CET was widely recognized as a standardized test of students' English skills in the past, numerous critiques now question the validity and reliability of this format of evaluation. Some people argue that a test composed mainly of multiple-choice items emphasizes the score and the test itself, rather than English as a tool for communication and cultural exchange. Critiques of the CET also extend to classroom teaching methods and learning content. Therefore, reforms in both instruction and evaluation have been called for over several years (and indeed CET-4 has gone through some reforms, but questions and doubts remain).

In 2013, several changes to CET-4 were made to fit the requirements of the syllabus, and some measures were taken to increase the test's authenticity. Though most students still have no access to the oral test section ("Till now, only a small fraction of students who got 550 points and above (out of 710 full marks) in the paper test of CET-4 are qualified to apply for oral test of CET-4."), new formats appeared in the listening, reading and translation sections. For instance, the dialogues in the listening section are now related to daily life, becoming more "authentic"; there is no cloze section any more, and students are instead required to translate a Chinese paragraph into English. Topics in the translation section cover a wide range of issues, including history, culture, social development and economic status. Some experts agree that the new test tries to learn from IELTS and TOEFL to increase its validity and reliability.

According to the results of an online survey conducted on Sina Weibo (a famous Chinese micro-blog platform), "23.1 percent out of 3,835 test-takers said they didn't finish the listening comprehension section", 38.4% of all respondents felt the new test "is really difficult", and around 20% of participants admitted they felt "totally lost". (Note: since the institute in charge of the CET publishes each year's test items after the exam, everyone can buy copies of the exam papers, and statements in the online survey use terms like "compared to the old version". So basically most new-test participants have an idea of what the old version looks like.)

Performance Problems

Problems arise here. Why did more than half of all respondents feel they could not adjust to the new formats? Why did nearly 60 percent of respondents feel either that the test was "really difficult" or that they were "totally lost" (compared to the old version)? Remember that the new CET-4, as an evaluation tool, is on the right track in spite of its existing faults. Obviously, students' reactions show that they are not well prepared yet, so the problem shifts to college English teaching courses. From some articles and unstructured interview results, several common issues are: heavily teacher-centered instruction; boring learning content; test-oriented instruction; students' motivation; and no connection between what is taught and what is tested (interestingly, that last one seems to be the opposite of test-oriented instruction).

Where we are now: students don't feel they learn much from class; the strategies utilized are merely for the exam, not for learning; the CET lacks adequate reliability and validity to evaluate English proficiency.

Where we want to go: college English classes contribute to students' English learning; strategies are used to facilitate teaching and learning; "authentic" and proper evaluation tools are developed to assess students' English as a communication tool.

The items listed above describe the "symptoms" and the situation we want to achieve after interventions are employed. The real problem is that students fail to reach the required English levels specified in the syllabus.

Assuming that the CET-4 certificate will remain so important to college students (diploma and job market), and that you are supposed to work on this discrepancy at the curriculum level, I would like to use Morrison's model (figure 2) to inform my ID model design, because it is curriculum-oriented and contains needs assessment as the very first phase. Stakeholders within the system include, but are not limited to, students, the school (teachers, technologists, etc.), content experts, curriculum designers and instructional designers. External needs come from companies and society. The key players are undergraduate students (English majors excluded). Outside the system there are two other interdependent sections: the National College English Syllabus and College English Test 4. If we look at the problem as a broader system, the National College English Syllabus becomes the input, the initial system becomes the process, and students' performance in CET-4 is the output. Therefore, when conducting a needs assessment, all three sections illustrated below in figure 1 should be included.

In Morrison's model, the front-end analysis process includes "instructional problems", "learner characteristics" and "task analysis". The purpose is to figure out what the gap is and what causes it. Alongside possible causes within the system, the syllabus and the College English Test might also contribute to students' poor performance. So in this context, the needs assessment must also address interrelated factors outside the system.

[Figure 1: the system and its two interdependent external sections, the National College English Syllabus and CET-4]

[Figure 2: Morrison's model]

Reference:

http://usa.chinadaily.com.cn/china/2013-12/16/content_17175808.htm

http://survey.edu.sina.com.cn/result/86802.html

http://elearningcurve.edublogs.org/2009/06/10/discovering-instructional-design-11-the-kemp-model/