Thursday, 5 December 2013

Instructional Design Models and Theories History


  1. 1903 – Ivan Pavlov discovers the Classical Conditioning Theory while conducting research on the digestive system of dogs
  2. 1910 – Edward Thorndike introduces his Laws of Learning and Connectionism Theory, which are based on the Active Learning Principles
  3. 1922 – Max Wertheimer, Kurt Koffka and Wolfgang Köhler introduce Gestalt Psychology
  4. 1932 - Psychologist Frederic Bartlett proposes the Schema Theory
  5. 1937 - B.F. Skinner introduces the Operant Conditioning Theory
  6. 1937 - May and Doob publish Cooperation and Competition, in which the Collaborative Learning Model is introduced, discussed, and analyzed
  7. 1950s - The Information Processing Theory emerges
  8. 1950s - Computer-based Instruction is used in educational and training environments
  9. 1954 - Skinner introduces the Programmed Instruction Educational Model
  10. 1960s - The Inquiry-based Learning Model is developed, based on constructivist learning theories
  11. 1961 - Jerome Bruner introduces the Discovery Learning Model
  12. 1960s - Howard Barrows introduces Problem-based Learning (PBL) in the medical education program at McMaster University in Canada
  13. 1962 - The Keller Plan revolves around the Individualized Instruction Model and is used in educational environments throughout the United States
  14. 1963 – David Ausubel publishes his findings on the Subsumption Theory
  15. 1971 - Allan Paivio proposes the Dual Coding Theory, a theory of cognition
  16. 1974 – Merlin Wittrock publishes the Generative Learning Theory
  17. 1978 – Lev Vygotsky's Social-Cultural Learning Theory is published posthumously in English
  18. 1979 – Charles Reigeluth introduces the Elaboration Theory
  19. 1980 - Reginald Revans introduces the Action Learning Model
  20. 1983 - David Merrill introduces the Component Display Theory and Instructional Model
  21. 1983 - J. M. Keller's ARCS Model of Motivation is published
  22. 1983 – The first Computer-supported Intentional Learning Environments (CSILEs) prototype is used in a university setting
  23. 1988 - Spiro, Feltovich, and Coulson introduce their Cognitive Flexibility Theory
  24. 1989 - Brown, Collins, Duguid and Newman introduce their Situated Cognition Theory and the Cognitive Apprenticeship Model
  25. 1990 - The Cognition & Technology Group at Vanderbilt University develops the Anchored Instruction Educational Model
  26. 1990s - Multimedia and CD-ROMs are introduced in educational environments
  27. 1991 - Lave and Wenger introduce the Communities of Practice Model and the Situated Learning Theory in "Situated Learning: Legitimate Peripheral Participation"
  28. 1991 - Hudspeth and Knirk publish the Case-Based Learning Model in Performance Improvement Quarterly
  29. 1992 - Roger C. Schank releases a technical report, introducing the Goal-based Scenario Model
  30. 1995 - Saltzberg and Polyson publish Distributed Learning on the World Wide Web, which outlines the Distributed Learning Model
  31. 1995 - Dodge and March develop WebQuest
  32. 1996 - Professor Joseph R. Codde publishes a report that outlines Contract Learning
  33. 2007 - M. Lombardi publishes a report, outlining the Authentic Learning Model

Wednesday, 24 July 2013

Robert Mager's performance-based learning objectives

Robert Mager's adage about setting quantifiable and measurable learning outcomes is deceptively simple, yet powerfully logical. If used along with the other two principles of conditions (under which the performance has to be done) and criteria of acceptable performance, it becomes a cornerstone for all instructional interventions, be it classroom training or eLearning.
Mager's principles always help to answer critical questions while designing training programs or developing storyboards for eLearning courses.
  • What will be the content for the subject?
  • What will be the duration of learning?
  • What will be the instructional strategy (how best can a particular topic be presented)?
  • What will be the type of assessments?
Several learning professionals have expressed their views on Mager's principles and on the extent to which they are used in today's world of rapid eLearning. Here are some of the responses:
Both in favor of and against Robert Mager's Objectives:
  1. I think it's important to remember that when Mager wrote his books on training and instructional design, e-learning wasn't around. That in no way diminishes the value of his works, or the benefits that instructional designers can gain from them today.
In fact, I would make the case that Mager's teaching in his book, "Making Instruction Work," is especially useful in today's rapid e-learning environment. For example, consider that Mager proposed that choosing a delivery method for training has to come after the training has been designed, else the medium you choose may not be able to support the content.
Today, we most often see the delivery method chosen (i.e., e-learning) before the first objective is drafted. The unanticipated consequence of this decision can be that the training is designed to fit the medium, rather than the required objectives.
  2. I remember that at a conference I attended, someone posed the question, "Would you send your child to a school where the only qualifications the teachers had were in computer programming?"
It sometimes seems as if that is the situation we have in e-learning today. The easy access to sophisticated tools for e-learning means that many people involved in learning design have no qualifications or experience in learning, but just have an ability to use the software.
That makes it apparently unnecessary to use tools like Mager's performance objectives, or for that matter any of the other elements of learning theories that have been developed and validated over the years. Instead, what we tend to see is a dump of information presented using various types of PowerPoint on steroids, relying on 'interactions' that are no more than technically sophisticated 'Press any key to continue'.
Yes, performance objectives are still vital if we are to create learning experiences that do actually help people to learn rather than providing some limited entertainment.
  3. Yes and no. Mager's basic three parts of the objectives continue to be seen in ALL versions of how to write good objectives that I've found. Some people have built on them (avoiding Gagne's objective format, which overthought the plumbing), with the main addition usually being "audience."
What I recommend (and last I checked was common for education too) is the ABCD (or ABC's) version of Audience, Behavior, Condition, and Degree (or Standard).
It is seldom that these foundational concepts in how we learn effectively become passé because they are foundational! They lay the groundwork that we continue to improve on or use effectively.
  4. Mager's principles will always be relevant to learners, supervisors, the vision and values of an organization, its senior leaders, and those who design and deliver learning: identify the desired performance action, environment, and standards compared to the existing performance, environment, and standards.
Design solutions to close gaps between existing and desired performance addressing the work, the worker and the workplace as an entire system, employing best practices in instruction, process improvement, project management and performance management.
And in terms of sensitivity, you can break this into clear behaviors and skills; you do not even have to be "sensitive" to actively listen, skillfully use open-ended and closed-ended questions to glean information, and skillfully recognize successful behaviors and activities in others, all skills (among many) that must be orchestrated to show leadership or expert "sensitivity". There are also many, many ways to weave affective domains into learning sequences. Therapy implies aberration and wellness implies fitness. This "training" is also performance based and must be under the auspices of an EAP and a good gym!
  5. I am a strong supporter of Mager objectives for, as you say, defining them answers critical questions during the design phase of a behavior change initiative. However, in my professional experience (10 years), I have had only one employer who understood their function; mostly, the management I have worked for has deemed such detailed objectives unnecessary time spent.
What is ironic, though, is that a well-defined Mager objective can provide the rarely achieved Kirkpatrick Level 4 form of evaluation. When Kirkpatrick Level 4 is achieved, there is a clear definition of the success of the original behavior change.
My real experience is that training professionals who lack a formal instructional design education miss out on Mager objectives and the ease with which they allow training materials to be developed. Rather, I have seen most training use purpose statements or goal statements (such as, "at the end of this course you will be able to do BLANK") as objectives. Management seems to be able to grasp these statements better than a detailed Mager objective.
My personal recipe for great behavior change initiatives combines the Dick and Carey instructional design method with Kirkpatrick's means of evaluation and Mager objective statements.
  6. Mager's guidance for writing objectives is timeless. But there are also myths that muddy the waters. Examples:
"You can write up objectives to reflect what you want to accomplish in your course." No, objectives should reflect the real world (business) requirements that are driving the need for training (if in fact training is required); they should not be drafted to reflect my existing course or my preconceived preferences for what should be in a course. Along the same lines, I reject the view that we should have two sets of objectives: "Performance" objectives and "Teaching/Learning/Course" objectives. This is a recipe for confusion. Performance objectives can and do provide laser sharp direction for what is needed from the developer, from the trainer, and from the learner.
"It wastes too much time." One well known author and conference speaker denigrates the ADDIE process and, along with objectives, would throw the whole thing out. To convince his followers, he paints a picture of those that use ADDIE and performance based methods as rigid and inflexible, resulting in unacceptably long development cycles. It is the straw man argument. I believe the ADDIE process can be flexible and more like a spiral than a straight line. Not only can the analysis part of ADDIE be "rapid" it can also save time by ensuring that the project, when done, actually accomplishes what the business requires - less tangential errors.
"Performance objectives are good for visible, hands-on types of tasks, but are less applicable to invisible or mental action types of tasks like problem-solving." This is not true for the person skilled in analysis and writing good objectives. The task analysis involved is more difficult and requires a better analyst, but ANY task is, by definition, a performance. As Mager would say, such tasks require "indicators." In addition, if broad and fuzzy goals are thrown around, Mager would say that a "goal analysis" is required.
I find that most people who deprecate Mager's focus on performance objectives have not fully read all of his materials.
All of that said, I do think that Mager's views provide a foundation upon which others have added value, including Ruth Clark and other cognitive practitioners in the area of design and development.
  7. So in a Mager objective, you specify the audience, the behavior, the condition(s), and finally the degree to which the behavior must be performed. Because each of these elements has been specifically stated, management can precisely identify the "degree targeted [envisioned] outcomes occur, as a result of the learning event(s) and subsequent reinforcement" (Kirkpatrick, J., & Kirkpatrick, W., 2009, http://www.trainingmag.com).
I have always interpreted Kirkpatrick Level 4 as the cultural/social/organizational level of measurement. So, to effectively measure an organization's performance, one needs to understand specifics of the organization.
Let me illustrate this for you with the use of a simple contact center environment from a customer's perspective. In this environment there are three main audiences: Front line agents, resolution agents, and management. As a group, the overall targeted behavior is to provide white-glove service. However, each group has differing conditions under which they can provide white-glove service. The front line agents have the least amount of authority, while authority increases for resolution agents, and is unlimited for management. And finally, the degree of performance increases as the role increases in complexity - management is expected to perform significantly better than front line agents.
Bringing this back around to Mager: say a training program is launched for the whole of the contact center. Because of the three audiences, which perform under varying conditions and expectations, the training cannot be a one-shot solution; a more complex solution is required. After the training occurs, management will want to know the overall impact the training had on their operation. If the impact is anything less than the target, management will want to know what failed. Because specifics were defined in the Mager objectives, learning professionals can analyze performance scores (Kirkpatrick Level 3) and identify the weak spot within the organization, which can be resolved through subsequent reinforcement (or another approach).
I hope this provides clarity on how I connect Mager objectives to Kirkpatrick evaluation and measurement.

  1. Training that is aligned with organizational goals through human performance is successful training. Mager established the methodology to obtain that type of alignment. More importantly, by stressing the need for observable criteria within objectives, Mager established the cornerstone of measurable ROI. In today's business environment, where more and more emphasis is placed on quantifying the value of training, Mager's ideas are absolutely essential.
  2. I agree that Robert Mager is as relevant today as ever. Training cannot be correctly targeted without performance-based learning objectives. Why is this crucial requirement ignored? One reason, perhaps, is that IDs/trainers find creating objectives extremely challenging, as it forces them to rethink ideas about what needs to be taught, and consequently how.
  3. Mager's logic is still sound, yet largely ignored. I think the rapid development mentality has forced us to overlook the tenets of good objective design. I rarely see a decent overt objective anymore. I suggest going a step beyond performance-based objectives and considering outcomes-based objectives instead. While both need alignment, outcomes imply sustainable performance at Level 3+. Mager is great for establishing organizational alignment in training objectives, but I would recommend you look to Geary, Rossett, & Robinson for aligning learning's contribution to sustainability.

SME vs. IDE

SMEs are the Subject Matter Experts, while you are the instructional design expert. An SME provides the content, while you arrange that content into material that can easily be learned:

SMEs are responsible for how tasks are to be performed, including the order of performance steps, while instructional designers are responsible for how that material will be presented (e.g., demonstrate, practice, hands-on test).

SMEs are responsible for the technical jargon, while instructional designers decide if that jargon needs to be explained.

SMEs are responsible for acceptable performance levels, while instructional designers decide how that performance will be evaluated (e.g., written, hands-on, oral).

SMEs are responsible for providing the performance objectives, while instructional designers are responsible for turning them into viable learning or performance objectives (task, conditions, and standards) and experiences.