Thursday, June 27, 2013

Educational Psychology

Educational psychology is the theoretical and research branch of modern psychology, concerned with the learning processes and psychological problems associated with the teaching and training of students. The educational psychologist studies the cognitive development of students and the various factors involved in learning, including aptitude and learning measurement, the creative process, and the motivational forces that influence dynamics between students and teachers. Educational psychology is a partly experimental and partly applied branch of psychology, concerned with the optimization of learning. It differs from school psychology, which is an applied field that deals largely with problems in elementary and secondary school systems.
Educational psychology traces its origins to the experimental and empirical work on association and sensory activity by the English anthropologist Sir Francis Galton and the American psychologist G. Stanley Hall, who wrote The Contents of Children’s Minds (1883). The leading figure in the field, however, was the American educator and psychologist Edward Lee Thorndike, who designed methods to measure and test children’s intelligence and their ability to learn. Thorndike proposed the transfer-of-training theory, which states that “what is learned in one sphere of activity ‘transfers’ to another sphere only when the two spheres share common ‘elements.’ ”

MOTIVATIONAL ASPECTS OF THINKING

The problem to be taken up and the point at which the search for a solution will begin are customarily prescribed by the investigator for a subject participating in an experiment on thinking (or by the programmer for a computer). Thus, prevailing techniques of inquiry in the psychology of thinking have invited neglect of the motivational aspects of thinking. Investigation has barely begun on the conditions that determine when the person will begin to think in preference to some other activity, what he will think about, what direction his thinking will take, and when he will regard his search for a solution as successfully terminated (or abandon it as not worth pursuing further). Although much thinking is aimed at practical ends, special motivational problems are raised by “disinterested” thinking, in which the discovery of an answer to a question is a source of satisfaction in itself.
In the views of the Gestalt school and of the British psychologist Frederic C. Bartlett, the initiation and direction of thinking are governed by recognition of a “disequilibrium” or “gap” in an intellectual structure. Similarly, Piaget’s notion of “equilibration” as a process impelling advance from less-equilibrated structures, fraught with uncertainty and inconsistency, toward better-equilibrated structures that overcome these imperfections was introduced to explain the child’s progressive intellectual development in general. Piaget’s approach may also be applicable to specific episodes of thinking. For computer specialists, the detection of a mismatch between the formula that the program has produced so far and some formula or set of requirements that defines a solution is what impels continuation of the search and determines the direction it will follow.
Neobehaviourism (like psychoanalysis) has made much of secondary reward value and stimulus generalization—i.e., the tendency of a stimulus pattern to become a source of satisfaction if it resembles or has frequently accompanied some form of biological gratification. The insufficiency of this kind of explanation becomes apparent, however, when the importance of novelty, surprise, complexity, incongruity, ambiguity, and uncertainty is considered. Inconsistency between beliefs, between items of incoming sensory information, or between one’s belief and an item of sensory information evidently can be a source of discomfort impelling a search for resolution through reorganization of belief systems or through selective acquisition of new information.
The motivational effects of such factors began receiving more attention in the middle of the 20th century, mainly because of the pervasive role they were found to perform in exploratory behaviour, play, and aesthetics. Their larger role in all forms of thinking has come to be appreciated and has been studied in relation to curiosity, conflict, and uncertainty.

Types of thinking
Philosophers and psychologists alike have long realized that thinking is not of a “single piece.” There are many different kinds of thinking, and there are various means of categorizing them into a “taxonomy” of thinking skills, but there is no single universally accepted taxonomy. One common approach divides the types of thinking into problem solving and reasoning, but other kinds of thinking, such as judgment and decision making, have been suggested as well.
Problem solving
Problem solving is a systematic search through a range of possible actions in order to reach a predefined goal. It involves two main types of thinking: divergent, in which one tries to generate a diverse assortment of possible alternative solutions to a problem, and convergent, in which one tries to narrow down multiple possibilities to find a single, best answer to a problem. Multiple-choice tests, for example, tend to involve convergent thinking, whereas essay tests typically engage divergent thinking.

THE PROBLEM-SOLVING CYCLE IN THINKING
Many researchers regard the thinking that is done in problem solving as cyclical, in the sense that the output of one set of processes—the solution to a problem—often serves as the input of another—a new problem to be solved. The American psychologist Robert J. Sternberg identified seven steps in problem solving, each of which may be illustrated in the simple example of choosing a restaurant:
Problem identification. In this step, the individual recognizes the existence of a problem to be solved: he recognizes that he is hungry, that it is dinnertime, and hence that he will need to take some sort of action.
Problem definition. In this step, the individual determines the nature of the problem that confronts him. He may define the problem as that of preparing food, of finding a friend to prepare food, of ordering food to be delivered, or of choosing a restaurant.
Resource allocation. Having defined the problem as that of choosing a restaurant, the individual determines the kind and extent of resources to devote to the choice. He may consider how much time to spend in choosing a restaurant, whether to seek suggestions from friends, and whether to consult a restaurant guide.
Problem representation. In this step, the individual mentally organizes the information needed to solve the problem. He may decide that he wants a restaurant that meets certain criteria, such as close proximity, reasonable price, a certain cuisine, and good service.
Strategy construction. Having decided what criteria to use, the individual must now decide how to combine or prioritize them. If his funds are limited, he might decide that reasonable price is a more important criterion than close proximity, a certain cuisine, or good service.
Monitoring. In this step, the individual assesses whether the problem solving is proceeding according to his intentions. If the possible solutions produced by his criteria do not appeal to him, he may decide that the criteria or their relative importance needs to be changed.
Evaluation. In this step, the individual evaluates whether the problem solving was successful. Having chosen a restaurant, he may decide after eating whether the meal was acceptable.
This example also illustrates how problem solving can be cyclical rather than linear. For example, once one has chosen a restaurant, one must determine how to get there, how much to tip, and so on.
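Sternberg’s steps describe a psychological process rather than an algorithm, but the cycle can be made concrete in code. The following Python toy is a hypothetical sketch, not Sternberg’s own method: the Restaurant data, the weights, and names such as choose_restaurant are invented for illustration, a weighted score stands in for strategy construction, and revising the weights after a disappointing choice plays the role of monitoring and evaluation feeding the next cycle.
```python
# A minimal sketch of the problem-solving cycle applied to the restaurant
# example. All data, weights, and function names are hypothetical.

from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    distance_km: float  # proximity: lower is better
    price: float        # average cost of a meal: lower is better
    rating: float       # food and service quality, 0-5: higher is better

def normalize(values, invert=False):
    # Problem representation: put each criterion on a common 0-1 scale,
    # inverting distance and price so that higher always means better.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(hi - v) / span if invert else (v - lo) / span for v in values]

def choose_restaurant(options, weights):
    # Strategy construction: combine the prioritized criteria into one score.
    dist = normalize([r.distance_km for r in options], invert=True)
    price = normalize([r.price for r in options], invert=True)
    rating = normalize([r.rating for r in options])
    scores = [weights["proximity"] * d + weights["price"] * p + weights["rating"] * q
              for d, p, q in zip(dist, price, rating)]
    return max(zip(options, scores), key=lambda pair: pair[1])[0]

# Problem identification and definition: it is dinnertime, and the problem
# has been defined as choosing a restaurant from the known options.
options = [
    Restaurant("Trattoria", distance_km=0.5, price=30.0, rating=4.5),
    Restaurant("Noodle Bar", distance_km=2.0, price=12.0, rating=4.0),
    Restaurant("Diner", distance_km=5.0, price=9.0, rating=3.5),
]

# Resource allocation: funds are limited, so price outweighs the other criteria.
weights = {"proximity": 0.1, "price": 0.7, "rating": 0.2}
print("First choice:", choose_restaurant(options, weights).name)   # Noodle Bar

# Monitoring and evaluation: if the meal disappoints, the diner revises the
# priorities, and the output of this cycle becomes the input of the next.
weights = {"proximity": 0.1, "price": 0.2, "rating": 0.7}
print("After revising:", choose_restaurant(options, weights).name)  # Trattoria
```
Re-running the choice with revised weights is what makes the sketch cyclical: the evaluation of one solution defines the new problem that starts the next pass.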

The process of thought
According to the classical empiricist-associationist view, the succession of ideas or images in a train of thought is determined by the laws of association. Although additional associative laws were proposed from time to time, two invariably were recognized. The law of association by contiguity states that the sensation or idea of a particular object tends to evoke the idea of something that has often been encountered together with it. The law of association by similarity states that the sensation or idea of a particular object tends to evoke the idea of something that is similar to it. The early behaviourists, beginning with Watson, espoused essentially the same formulation but with some important modifications. For them the elements of the process were conceived not as conscious ideas but as fractional or incipient motor responses, each producing its proprioceptive stimulus. Association by contiguity and similarity were identified by these behaviourists with the Pavlovian principles of conditioning and generalization.
The Würzburg school, under the leadership of the German psychologist and philosopher Oswald Külpe, saw the prototype of directed thinking in the “constrained-association” experiment, in which the subject has to supply a word bearing a specified relation to a stimulus word (e.g., an opposite to an adjective, or the capital of a country). Introspective research led the members of the Würzburg school to conclude that the emergence of the required element depends jointly on the immediately preceding element and on some kind of “determining tendency” such as Aufgabe (“awareness of task”) or “representation of the goal.” The last two factors were held to impart a direction to the thought process and to restrict its content to relevant material. Their role was analogous to that of motivational factors—“drive stimuli,” “fractional anticipatory goal responses”—in the later neobehaviouristic accounts of reasoning (and of behaviour in general) produced by Hull and his followers. Hull’s theory resembled the earlier “constellation theory” of constrained association developed by Georg Elias Müller. Hull held that one particular response will occur and overcome its competitors because it is associated both with the cue stimulus (which may be the immediately preceding thought process or an external event) and with the motivational condition (task, drive stimulus) and is thus evoked with more strength than are elements associated only with the cue stimulus or the motivational condition.
The German psychologist Otto Selz countered that in many situations this kind of theory would imply the occurrence of errors as often as correct answers to questions and thus was untenable. Selz contended that response selection depends rather on a process of “complex completion” that is set in motion by an “anticipatory schema,” which includes a representation of both the cue stimulus and the relation that the element to be supplied must bear to the cue stimulus. The correct answer is associated with the schema as a whole and not with its components separately. Selz’s complex completion resembles the “eduction of correlates” that the British psychologist Charles E. Spearman saw as a primary constituent of intellectual functioning, its complement being “eduction of relations”—that is, recognition of a relation when two elements are presented.
The determination of each thought element by the whole configuration of factors in the situation and by the network of relations linking them was stressed still more strongly by the Gestalt psychologists in the 1920s and ’30s. On the basis of experiments by Wolfgang Köhler (on “insightful” problem solving by chimpanzees) and Max Wertheimer and his student Karl Duncker (on human thinking), they pointed out that the solution to a problem commonly requires an unprecedented response or pattern of responses that hardly could be attributed to simple associative reproduction of past behaviour or experiences. For them, the essence of thinking lay in sudden perceptual restructuring or reorganization, akin to the abrupt changes in appearance of an ambiguous visual figure.
The Gestalt theory has had a deep and far-reaching impact, especially in drawing attention to the ability of the thinker to discover creative, innovative ways of coping with situations that differ from any that have been encountered before. This theory, however, has been criticized for underestimating the contribution of prior learning and for not going beyond rudimentary attempts to classify and analyze the structures that it deems so important. Later discussions of the systems in which items of information and intellectual operations are organized have made fuller use of the resources of logic and mathematics. Merely to name them, they include the “psychologic” of Piaget, the computer simulation of human thinking by the American computer scientists Herbert A. Simon and Allen Newell, and extensions of Hull’s notion of the “habit-family hierarchy” by Irving Maltzman and Daniel E. Berlyne.
Also important is a growing recognition that the essential components of the thought process, the events that keep it moving in fruitful directions, are not words, images, or other symbols representing stimulus situations; rather, they are the operations that cause each of these representations to be succeeded by the next, in conformity with restrictions imposed by the problem or aim of the moment. In other words, directed thinking can reach a solution only by going through a properly ordered succession of “legitimate steps.” These steps might be representations of realizable physicochemical changes, modifications of logical or mathematical formulas that are permitted by rules of inference, or legal moves in a game of chess. This conception of the train of thinking as a sequence of rigorously controlled transformations is buttressed by the theoretical arguments of Sechenov and of Piaget, the results of the Würzburg experiments, and the lessons of computer simulation.
Early in the 20th century, the French physician Édouard Claparède and the American philosopher John Dewey both suggested that directed thinking proceeds by “implicit trial-and-error.” That is to say, it resembles the process whereby laboratory animals, confronted with a novel problem situation, try out one response after another until they sooner or later hit upon a response that leads to success. In thinking, however, the trials were said to take the form of internal responses (imagined or conceptualized courses of action, directions of symbolic search); once attained, a train of thinking that constitutes a solution frequently can be recognized as such without the necessity of implementation through action and sampling of external consequences. This kind of theory, popular among behaviourists and neobehaviourists, was stoutly opposed by the Gestalt school, whose insight theory emphasized the discovery of a solution as a whole and in a flash.
The divergence between these theories appears, however, to represent a false dichotomy. The protocols of Köhler’s chimpanzee experiments and of the rather similar experiments performed later under Pavlov’s auspices show that insight typically is preceded by a period of groping and of misguided attempts at a solution that are eventually abandoned. On the other hand, even the trial-and-error behaviour of an animal in a simple selective-learning situation does not consist of a completely blind and random sampling of the behaviour of which the learner is capable. Rather, it consists of responses that very well might have succeeded if the circumstances had been slightly different.
Newell, Simon, and the American computer scientist J. Clifford Shaw pointed out the indispensability in creative human thinking, as in its computer simulations, of what they called “heuristics.” A large number of possibilities may have to be examined, but the search is organized heuristically in such a way that the directions most likely to lead to success are explored first. Means of ensuring that a solution will occur within a reasonable time, certainly much faster than by random hunting, include adoption of successive subgoals and working backward from the final goal (the formula to be proved, the state of affairs to be brought about).
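The following Python sketch shows the kind of heuristically ordered search this describes: candidate states are expanded best-first, in order of an estimate of their distance from the goal, so the most promising directions are explored before less promising ones. The toy puzzle (reach a target number from a starting number using only the moves +3 and *2) and all of the names are hypothetical, chosen only to make the ordering visible; it is a sketch of the general technique, not of Newell, Simon, and Shaw’s actual programs.
```python
# A toy best-first ("heuristic") search: states judged closest to the goal
# by a simple distance estimate are explored before less promising ones.
# The puzzle and its permitted moves (+3, *2) are hypothetical.

import heapq

def best_first(start, goal):
    """Return one sequence of legitimate steps from start to goal."""
    frontier = [(abs(goal - start), start, [start])]  # (heuristic, state, path)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)      # most promising state first
        if state == goal:
            return path
        for nxt in (state + 3, state * 2):            # the only legal moves
            if nxt not in seen and nxt <= goal * 2:   # prune hopeless branches
                seen.add(nxt)
                heapq.heappush(frontier, (abs(goal - nxt), nxt, path + [nxt]))
    return None                                       # search abandoned

print(best_first(2, 25))  # e.g. [2, 5, 10, 13, 16, 19, 22, 25]
```
Subgoaling and working backward from the final goal fit the same skeleton; they change which states are generated and how the heuristic estimate is computed, not the best-first ordering itself.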

Thoughts

Thoughts are covert symbolic responses to stimuli that are either intrinsic (arising from within) or extrinsic (arising from the environment). Thought, or thinking, is considered to mediate between inner activity and external stimuli.
In everyday language, the word thinking covers several distinct psychological activities. It is sometimes a synonym for “tending to believe,” especially with less than full confidence (“I think that it will rain, but I am not sure”). At other times it denotes the degree of attentiveness (“I did it without thinking”) or whatever is in consciousness, especially if it refers to something outside the immediate environment (“It made me think of my grandmother”). Psychologists have concentrated on thinking as an intellectual exertion aimed at finding an answer to a question or the solution of a practical problem.
The psychology of thought processes concerns itself with activities similar to those usually attributed to the inventor, the mathematician, or the chess player, but psychologists have not settled on any single definition or characterization of thinking. For some it is a matter of modifying “cognitive structures” (i.e., perceptual representations of the world or parts of the world), while others regard it as internal problem-solving behaviour.
Yet another provisional conception of thinking applies the term to any sequence of covert symbolic responses (i.e., occurrences within the human organism that can serve to represent absent events). If such a sequence is aimed at the solution of a specific problem and fulfills the criteria for reasoning, it is called directed thinking. Reasoning is a process of piecing together the results of two or more distinct previous learning experiences to produce a new pattern of behaviour. Directed thinking contrasts with other symbolic sequences that have different functions, such as the simple recall (mnemonic thinking) of a chain of past events.
Historically, thinking was associated with conscious experiences, but, as the scientific study of behaviour (e.g., behaviourism) developed within psychology, the limitations of introspection as a source of data became apparent; thought processes have since been treated as intervening variables or constructs with properties that must be inferred from relations between two sets of observable events. These events are inputs (stimuli, present and past) and outputs (responses, including bodily movements and speech). For many psychologists such intervening variables serve as aids in making sense of the immensely complicated network of associations between stimulus conditions and responses, the analysis of which otherwise would be prohibitively cumbersome. Others are concerned, rather, with identifying cognitive (or mental) structures that consciously or unconsciously guide a human being’s observable behaviour.
The prominent use of words in thinking (“silent speech”) encouraged the belief, especially among behaviourist and neobehaviourist psychologists, that to think is to string together linguistic elements subvocally. Early experiments revealed that thinking is commonly accompanied by electrical activity in the muscles of the thinker’s organs of articulation (e.g., in the throat). Through later work with electromyographic equipment, it became apparent that the muscular phenomena are not the actual vehicles of thinking; they merely facilitate the appropriate activities in the brain when an intellectual task is particularly exacting. The identification of thinking with speech was assailed by the Russian psychologist Lev Semyonovich Vygotsky and by the Swiss developmental psychologist Jean Piaget, both of whom observed the origins of human reasoning in children’s general ability to assemble nonverbal acts into effective and flexible combinations. These theorists insisted that thinking and speaking arise independently, although they acknowledged the profound interdependence of these functions.
Following different approaches, three scholars—the 19th-century Russian physiologist Ivan Mikhailovich Sechenov; the American founder of behaviourism, John B. Watson; and Piaget—independently arrived at the conclusion that the activities that serve as elements of thinking are internalized or “fractional” versions of motor responses. In other words, the elements are considered to be attenuated or curtailed variants of neuromuscular processes that, if they were not subjected to partial inhibition, would give rise to visible bodily movements.
Sensitive instruments can indeed detect faint activity in various parts of the body other than the organs of speech—e.g., in a person’s limbs when movement is thought of or imagined without actually taking place. Recent studies show the existence of a gastric “brain,” a set of neural networks in the stomach. Such findings have prompted theories to the effect that people think with the whole body and not only with the brain, or that, in the words of the American psychologist B.F. Skinner, “thought is simply behaviour—verbal or nonverbal, covert or overt.”
The logical outcome of these and similar statements was the peripheralist view. Evident in the work of Watson and the American psychologist Clark L. Hull, it held that thinking depends on events in the musculature: these events, known as proprioceptive impulses (i.e., impulses arising in response to physical position, posture, equilibrium, or internal condition), influence subsequent events in the central nervous system, which ultimately interact with external stimuli in guiding further action. There is, however, evidence that thinking is not prevented by administering drugs that suppress all muscular activity. Furthermore, it has been pointed out by researchers such as the American psychologist Karl S. Lashley that thinking, like other more-or-less skilled activities, often proceeds so quickly that there is not enough time for impulses to be transmitted from the central nervous system to a peripheral organ and back again between consecutive steps. So the centralist view—that thinking consists of events confined to the brain (though often accompanied by widespread activity in the rest of the body)—gained ground later in the 20th century. Nevertheless, each of these neural events can be regarded both as a response (to an external stimulus or to an earlier neurally mediated thought or combination of thoughts) and as a stimulus (evoking a subsequent thought or a motor response).
The elements of thinking are classifiable as “symbols” in accordance with the conception of the sign process (“semiotics”) that grew out of the work of philosophers (e.g., Charles Sanders Peirce), linguists (e.g., C.K. Ogden and Ivor A. Richards), and psychologists specializing in learning (e.g., Hull, Neal E. Miller, O. Hobart Mowrer, and Charles E. Osgood). The gist of this conception is that a stimulus event x can be regarded as a sign representing (or “standing for”) another event y if x evokes some, but not all, of the behaviour (both external and internal) that would have been evoked by y if it had been present. When a stimulus that qualifies as a sign results from the behaviour of an organism for which it acts as a sign, it is called a “symbol.” The “stimulus-producing responses” that are said to make up thought processes (as when one thinks of something to eat) are prime examples.
This treatment, favoured by psychologists of the stimulus-response (S-R) or neo-associationist current, contrasts with that of the various cognitivist or neorationalist theories. Rather than regarding the components of thinking as derivatives of verbal or nonverbal motor acts (and thus subject to laws of learning and performance that apply to learned behaviour in general), cognitivists see the components of thinking as unique central processes, governed by principles that are peculiar to them. These theorists attach overriding importance to the so-called structures in which “cognitive” elements are organized, and they tend to see inferences, applications of rules, representations of external reality, and other ingredients of thinking at work in even the simplest forms of learned behaviour.
The school of Gestalt psychology holds the constituents of thinking to be of essentially the same nature as the perceptual patterns that the nervous system constructs out of sensory excitations. After the mid-20th century, analogies with computer operations acquired great currency; in consequence, thinking came to be described in terms of storage, retrieval, and transmission of items of information. The information in question was held to be freely translatable from one “coding” to another without impairing its functions. What came to matter most was how events were combined and what other combinations might have occurred instead.

Wednesday, June 26, 2013

Behaviourism

Behaviourism is a highly influential academic school of psychology that dominated psychological theory between the two world wars. Classical behaviourism, prevalent in the first third of the 20th century, was concerned exclusively with measurable and observable data and excluded ideas, emotions, and the consideration of inner mental experience and activity in general. In behaviourism, the organism is seen as “responding” to conditions (stimuli) set by the outer environment and by inner biological processes.
The previously dominant school of thought, structuralism, conceived of psychology as the science of consciousness, experience, or mind; although bodily activities were not excluded, they were considered significant chiefly in their relations to mental phenomena. The characteristic method of structuralism was thus introspection—observing and reporting on the working of one’s own mind.
The early formulations of behaviourism were a reaction by U.S. psychologist John B. Watson against the introspective psychologies. In Behaviorism (1924), Watson wrote that “Behaviorism claims that ‘consciousness’ is neither a definable nor a usable concept; that it is merely another word for the ‘soul’ of more ancient times. The old psychology is thus dominated by a subtle kind of religious philosophy.” Watson believed that behaviourism “attempted to make a fresh, clean start in psychology, breaking both with current theories and with traditional concepts and terminology” (from Psychology from the Standpoint of a Behaviorist, 3rd ed., 1929). Introspection was to be discarded; only such observations were to be considered admissible as could be made by independent observers of the same object or event—exactly as in physics or chemistry. In this way psychology was to become “a purely objective, experimental branch of natural science.” However abstract these proposals may seem, they have had a revolutionary influence on modern psychology and social science and on our conception of ourselves. Watson’s objectivist leanings were presaged by many developments in the history of thought, and his work typified strong trends that had been emerging in biology and psychology since the late 19th century. Thus, Watson’s desire to “bury subjective subject matter” received widespread support. Between the early 1920s and mid-century, the methods of behaviourism dominated U.S. psychology and had wide international repercussions. Although the chief alternatives to behaviourism (e.g., Gestalt psychology and psychoanalysis) advocated methods based on experiential data, even these alternatives accommodated the objectivist approach by emphasizing a need for objective validation of experientially based hypotheses.
The period 1912–30 (roughly) may be called that of classical behaviourism. Watson was then the dominant figure, but many others were soon at work giving their own systematic twists to the development of the program. Classical behaviourism was dedicated to proving that phenomena formerly believed to require introspective study (such as thinking, imagery, emotions, or feeling) might be understood in terms of stimulus and response. Classical behaviourism was further characterized by a strict determinism based on the belief that every response is elicited by a specific stimulus. 
A derivative form of classical behaviourism known as neobehaviourism evolved from 1930 through the late 1940s. In this approach, psychologists attempted to translate the general methodology prescribed by Watson into a detailed, experimentally based theory of adaptive behaviour. This era was dominated by learning theorists Clark L. Hull and B.F. Skinner; Skinner’s thought was the direct descendant of Watson’s intellectual heritage and became dominant in the field after the mid-1950s. Other important behaviourists included Hull-influenced Kenneth W. Spence; Neal Miller, who claimed that neuroscience is the most productive avenue in psychological research; cognitive theorist Edward C. Tolman; and Edwin R. Guthrie. Tolman and others brought about a liberalization of strict behaviourist doctrine. The posture toward objectivism remained fundamentally the same, even while admitting the existence of intervening (i.e., mental) variables, accepting verbal reports, and branching into areas such as perception. A natural outgrowth of behaviourist theory was behaviour therapy, which rose to prominence after World War II and focused on modifying observable behaviour, rather than the thoughts and feelings of the patient (as in psychoanalysis). In this approach, emotional problems are thought to result from faulty acquired behaviour patterns or the failure to learn effective responses. The aim of behaviour therapy, also known as behaviour modification, is therefore to change behaviour patterns. See also conditioning.

Distance education

Distance education, also called distance learning, e-learning, or online learning, is a form of education in which the main elements include physical separation of teachers and students during instruction and the use of various technologies to facilitate student-teacher and student-student communication. Distance learning traditionally has focused on nontraditional students, such as full-time workers, military personnel, and nonresidents or individuals in remote regions who are unable to attend classroom lectures. However, distance learning has become an established part of the educational world, with trends pointing to ongoing growth. An increasing number of universities provide distance learning opportunities. Students and institutions embrace distance learning with good reason. Universities benefit by adding students without having to construct classrooms and housing, and students reap the advantages of being able to work where and when they choose. Public school systems offer specialty courses such as small-enrollment languages and Advanced Placement classes without having to set up multiple classrooms. In addition, homeschooled students gain access to centralized instruction.

Characteristics of distance learning
Various terms have been used to describe the phenomenon of distance learning. Strictly speaking, distance learning (the student’s activity) and distance teaching (the teacher’s activity) together make up distance education. Common variations include e-learning or online learning, used when the Internet is the medium; virtual learning, which usually refers to courses taken outside a classroom by primary- or secondary-school pupils (and also typically using the Internet); correspondence education, the long-standing method in which individual instruction is conducted by mail; and open learning, the system common in Europe for learning through the “open” university.
Four characteristics distinguish distance learning. First, distance learning is by definition carried out through institutions; it is not self-study or a nonacademic learning environment. The institutions may or may not offer traditional classroom-based instruction as well, but they are eligible for accreditation by the same agencies as those employing traditional methods.
Second, geographic separation is inherent in distance learning, and time may also separate students and teachers. Accessibility and convenience are important advantages of this mode of education. Well-designed programs can also bridge intellectual, cultural, and social differences between students. 
Third, interactive telecommunications connect individuals within a learning group and with the teacher. Most often, electronic communications, such as e-mail, are used, but traditional forms of communication, such as the postal system, may also play a role. Whatever the medium, interaction is essential to distance education, as it is to any education. The connections of learners, teachers, and instructional resources become less dependent on physical proximity as communications systems become more sophisticated and widely available; consequently, the Internet, cell phones, and e-mail have contributed to the rapid growth in distance learning.
Finally, distance education, like any education, establishes a learning group, sometimes called a learning community, which is composed of students, a teacher, and instructional resources—i.e., the books, audio, video, and graphic displays that allow the student to access the content of instruction. Social networking on the Internet promotes the idea of community building. On sites such as Facebook and YouTube, users construct profiles, identify members (“friends”) with whom they share a connection, and build new communities of like-minded persons. In the distance learning setting, such networking can enable students’ connections with each other and thereby reduce their sense of isolation.

Early history of distance learning
Geographical isolation from schools and dispersed religious congregations spurred the development of religious correspondence education in the United States in the 19th century. For example, the Chautauqua Lake Sunday School Assembly in western New York state began in 1874 as a program for training Sunday school teachers and church workers. From its religious origins, the program gradually expanded to include a nondenominational course of directed home reading and correspondence study. Its success led to the founding of many similar schools throughout the United States in the Chautauqua movement. It was the demand by industry, government, and the military for vocational training, however, that pushed distance learning to new levels. In Europe, mail-order courses had been established by the middle of the 19th century, when the Society of Modern Languages in Berlin offered correspondence courses in French, German, and English. In the United States, companies such as Strayer’s Business College of Baltimore City (now Strayer University), which was founded in Maryland in 1892 and included mail-order correspondence courses, were opened to serve the needs of business employers, especially in the training of women for secretarial duties. Most nonreligious mail-order correspondence courses emphasized instruction in spelling, grammar, business letter composition, and bookkeeping, but others taught everything from developing esoteric mental powers to operating a beauty salon. The clear leader in correspondence course instruction in American higher education at the end of the 19th century was the University of Chicago, where William Rainey Harper employed methods that he had used as director of the Chautauqua educational system for several years starting in 1883.

Use of computers in instruction

The use of computers in education started in the 1960s. With the advent of convenient microcomputers in the 1970s, computer use in schools became widespread, from primary education through the university level and even in some preschool programs. Instructional computers are used in one of two basic ways: either they provide a straightforward presentation of data, or they fill a tutorial role in which the student is tested on comprehension. In a tutorial program, the computer poses a question; the student types in an answer and gets an immediate response to it. If the answer is correct, the student is routed to more challenging problems; if the answer is incorrect, various computer messages indicate the flaw in procedure, and the program bypasses more complicated questions until the student shows mastery in that area.
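The routing logic just described can be sketched in a few lines of Python. This is a hypothetical toy, not any historical courseware: the question bank, the mastery rule (repeat a level until it is answered without error), and the scripted student responses standing in for keyboard input are all invented for illustration.
```python
# A minimal sketch of the tutorial pattern described above: pose a question,
# respond immediately, and withhold harder items until the current level is
# mastered. Question bank and routing rule are hypothetical.

QUESTIONS = {
    1: [("3 + 4", "7"), ("9 - 2", "7")],    # level 1: basic drill
    2: [("6 * 7", "42"), ("56 / 8", "7")],  # level 2: more challenging
}

def run_tutorial(get_answer):
    level = 1
    while level <= max(QUESTIONS):
        mistakes = 0
        for prompt, correct in QUESTIONS[level]:
            reply = get_answer(prompt)
            if reply == correct:
                print(f"{prompt} = {reply}  (correct)")
            else:
                mistakes += 1
                print(f"{prompt}: the answer is {correct}, not {reply}; review this step")
        if mistakes == 0:
            level += 1  # mastery shown: route to more challenging problems
        # otherwise repeat this level, bypassing the harder questions for now

# Scripted responses stand in for a real student typing at the keyboard.
scripted = iter(["7", "6", "7", "7", "42", "7"])
run_tutorial(lambda prompt: next(scripted))
```
In this run the student misses one level 1 item, repeats that level until it is error-free, and only then sees the level 2 questions, mirroring the diagnose-and-bypass behaviour described above.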
There are many advantages to using computers in educational instruction. They provide one-to-one interaction with a student, as well as an instantaneous response to the answers elicited, and allow students to proceed at their own pace. Computers are particularly useful in subjects that require drill, freeing teacher time from some classroom tasks so that a teacher can devote more time to individual students. A computer program can be used diagnostically, and, once a student’s problem has been identified, it can then focus on the problem area. Finally, because of the privacy and individual attention afforded by a computer, some students are relieved of the embarrassment of giving an incorrect answer publicly or of going more slowly through lessons than other classmates. There are drawbacks to the implementation of computers in instruction, however. They are generally costly systems to purchase, maintain, and update. There are also fears, whether justified or not, that the use of computers in education decreases the amount of human interaction.
One of the more difficult aspects of instructional computing is the availability and development of software, or courseware. Courseware can be bought as a fully developed package from a software company, but the program provided this way may not suit the particular needs of the individual class or curriculum. Alternatively, a courseware template may be purchased, which provides a general format for tests and drill instruction, with the particulars to be inserted by the individual school system or teacher. The disadvantage of this approach is that instruction tends to be boring and repetitive, with tests and questions following the same pattern for every course. Software can also be developed in-house, that is, a school, course, or teacher can produce courseware exactly tailored to its own needs, but this is expensive and time-consuming and may require more programming expertise than is available.