Publication Date

| Date range | Records |
| --- | --- |
| In 2026 | 0 |
| Since 2025 | 2 |
| Since 2022 (last 5 years) | 6 |
| Since 2017 (last 10 years) | 9 |
| Since 2007 (last 20 years) | 41 |
Location

| Location | Records |
| --- | --- |
| Spain | 4 |
| Australia | 3 |
| Greece | 2 |
| United Kingdom | 2 |
| United Kingdom (England) | 2 |
| Canada (Montreal) | 1 |
| France | 1 |
| Kentucky | 1 |
| Netherlands | 1 |
| New York | 1 |
| Pennsylvania | 1 |
Laws, Policies, & Programs

| Law/Program | Records |
| --- | --- |
| Rehabilitation Act Amendments… | 1 |
Peer reviewed: Havice, Michael – Journalism Quarterly, 1989
Examines the electronic polling process (telephone polling that uses synthesized or digitized voice). Compares the process with two similar telephone polls and provides a basic cost efficiency comparison between the polls. Finds that digitized systems place more calls but get lower response rates than regular phone surveys. (MM)
Descriptors: Artificial Speech, Audience Response, Communication Research, Comparative Analysis
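Havice's cost-efficiency comparison reduces to simple arithmetic: the cost per completed response, where a digitized system's higher call volume is traded off against its lower response rate. A minimal sketch in Python, with all figures hypothetical rather than taken from the study:

```python
def cost_per_completed_response(total_cost, calls_placed, response_rate):
    """Dollars spent per completed poll response."""
    completed = calls_placed * response_rate
    return total_cost / completed

# Hypothetical figures for illustration only (not Havice's data):
# the digitized system places more calls but converts fewer of them.
digitized = cost_per_completed_response(total_cost=500.0, calls_placed=2000, response_rate=0.15)
live_poll = cost_per_completed_response(total_cost=500.0, calls_placed=800, response_rate=0.45)
print(f"digitized: ${digitized:.2f} per completed response")   # $1.67
print(f"live poll: ${live_poll:.2f} per completed response")   # $1.39
```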
Peer reviewed: Sapir, Shimon; And Others – Journal of Speech and Hearing Research, 1993
Thirteen university students listened to synthesized vowels, presented 14 times randomly, and uttered each of the vowels as soon as they heard it. Serial analysis of successive auditory-vocal reaction times (AVRTs) revealed significant intrasession and intersession decreases in AVRTs in the majority of subjects. AVRT increases were also seen but…
Descriptors: Artificial Speech, Change, Communication Disorders, Perceptual Motor Coordination
Peer reviewed: Drager, Kathryn D. R.; Reichle, Joe E. – Journal of Speech, Language, and Hearing Research, 2001
This study investigated whether discourse context affected the intelligibility of synthesized sentences for young adult and older adult listeners. Findings indicated a significant facilitating effect of context wherein previous words and sentences are related to later sentences for both listener groups. Results have direct implications for…
Descriptors: Adults, Artificial Speech, Augmentative and Alternative Communication, Communication Disorders
Segers, Eliane; Verhoeven, Ludo – Journal of Communication Disorders, 2005
The present study investigated whether kindergartners with specific language impairment (SLI) and normal language achieving (NLA) kindergartners can benefit from slowing down the entire speech signal, or only part of it, in a synthetic speech discrimination task. Subjects were 19 kindergartners with SLI and 24 NLA controls.…
Descriptors: Artificial Speech, Language Impairments, Auditory Discrimination, Kindergarten
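Slowing down all or part of a speech signal, as in the Segers and Verhoeven discrimination task, is a time-stretch operation; the sketch below uses librosa, which is my choice of a modern tool and not what the 2005 study used.

```python
import numpy as np
import librosa

# Load any speech recording (librosa's bundled LibriSpeech example is used here).
y, sr = librosa.load(librosa.ex("libri1"), sr=None)

# Slow the entire signal to half speed without changing pitch (rate < 1 stretches).
y_slow_all = librosa.effects.time_stretch(y, rate=0.5)

# Slow only the first 200 ms and leave the rest intact, roughly analogous to
# stretching just one portion of the signal rather than the whole utterance.
cut = int(0.2 * sr)
y_slow_part = np.concatenate([librosa.effects.time_stretch(y[:cut], rate=0.5), y[cut:]])
```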
Evitts, Paul M.; Searl, Jeff – Journal of Speech, Language, and Hearing Research, 2006
The purpose of this study was to compare listener processing demands when decoding alaryngeal versus laryngeal speech. Fifty-six listeners were presented with single words produced by 1 proficient speaker from 5 different modes of speech: normal, tracheoesophageal (TE), esophageal (ES), electrolaryngeal (EL), and synthetic speech (SS).…
Descriptors: Artificial Speech, Reaction Time, Cognitive Processes, Intermode Differences
Vincent, A. T. – 1981
A project, designed primarily to enable blind students to use the Open University's Student Computer Services more easily, was implemented to generate computer programs (in BASIC) for a limited-configuration microcomputer with synthetic speech as the only output. Three techniques can be identified with synthetic speech generation:…
Descriptors: Accessibility (for Disabled), Artificial Speech, Blindness, Computer Software
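As a modern stand-in for the Vincent project's idea of synthetic speech as the only output channel (the original work generated BASIC programs for a limited microcomputer), here is a sketch using the pyttsx3 text-to-speech package, which is my choice of library and not part of the 1981 project:

```python
import pyttsx3

def speak(text: str) -> None:
    """Render a line of program feedback as synthetic speech instead of printing it."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

# All feedback is spoken, never displayed, so the program is usable without a screen.
speak("Program loaded. Adding two numbers.")
a, b = 2, 3  # stand-in for user input
speak(f"The sum of {a} and {b} is {a + b}.")
```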
Peer reviewed: Hebert, Bobbie M.; Murdock, Jane Y. – Learning Disabilities Research and Practice, 1994
Three sixth-grade students with language learning disabilities performed better on learning vocabulary words when using computer-aided instruction (CAI) with speech output than CAI without speech. Two students did better using CAI with digitized speech, and one student made greater gains using CAI with synthesized speech. (Author/JDD)
Descriptors: Artificial Speech, Computer Assisted Instruction, Intermediate Grades, Language Impairments
Elman, Jeffery Locke; Zipser, David – 1987
The back-propagation neural network learning procedure was applied to the analysis and recognition of speech. Because this learning procedure requires only examples of input-output pairs, it is not necessary to provide it with any initial description of speech features. Rather, the network develops its own set of representational features…
Descriptors: Articulation (Speech), Artificial Speech, Communication Research, Computers
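The point the Elman and Zipser abstract makes, that back-propagation needs only input-output pairs and no hand-specified speech features, can be illustrated with a minimal two-layer network in NumPy; the toy data and layer sizes below are mine, not the study's actual spectrogram inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input-output pairs: 100 "frames" of 16 spectral values, 3 target classes.
X = rng.normal(size=(100, 16))
Y = np.eye(3)[rng.integers(0, 3, size=100)]   # one-hot targets

# Two-layer network; weights start random and no speech features are specified.
W1 = rng.normal(scale=0.1, size=(16, 8))
W2 = rng.normal(scale=0.1, size=(8, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    H = sigmoid(X @ W1)             # hidden layer: learned internal representation
    P = sigmoid(H @ W2)             # output layer
    dP = (P - Y) * P * (1 - P)      # back-propagate the output error...
    dH = (dP @ W2.T) * H * (1 - H)  # ...through the hidden layer
    W2 -= 0.1 * H.T @ dP
    W1 -= 0.1 * X.T @ dH

print("final mean squared error:", float(np.mean((P - Y) ** 2)))
```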
Haskins Labs., New Haven, CT. – 1977
This report is one of a regular series on the status and progress of studies on the nature of speech, instrumentation for its investigation, and practical applications. The ten papers treat the following topics: speech synthesis as a tool for the study of speech production; the study of articulatory organization; phonetic perception; cardiac…
Descriptors: Acoustic Phonetics, Articulation (Speech), Artificial Speech, Auditory Perception
Willis, Clodius – 1969
These experiments investigated and described intra-subject, inter-subject, and inter-group variation in perception of synthetic vowels as well as the possibility that inter-group differences reflect dialect differences. Two tests were made covering the full phonetic range of English vowels. In two other tests subjects chose between one of two…
Descriptors: Acoustic Phonetics, Artificial Speech, Auditory Perception, Dialect Studies
Peer reviewed: Kannenberg, Patricia; And Others – Journal of Communication Disorders, 1988
The intelligibility of two voice-output communication aids ("Personal Communicator" and "SpeechPAC") was evaluated by presenting synthesized words and sentences to 20 listeners. Analysis of listener transcriptions revealed significantly higher intelligibility scores for the "Personal Communicator" compared to the…
Descriptors: Artificial Speech, Assistive Devices (for Disabled), Communication Aids (for Disabled), Communication Disorders
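Intelligibility scores of the kind Kannenberg and colleagues derive from listener transcriptions are commonly computed as the percentage of target words transcribed correctly; the scoring function below is my own minimal formulation, not necessarily the study's exact procedure.

```python
def intelligibility_score(targets, transcriptions):
    """Percent of target words the listener transcribed correctly (position-matched)."""
    correct = sum(t.lower() == h.lower() for t, h in zip(targets, transcriptions))
    return 100.0 * correct / len(targets)

targets = ["water", "happy", "window", "table"]
heard = ["water", "apple", "window", "cable"]   # hypothetical listener transcription
print(f"intelligibility: {intelligibility_score(targets, heard):.1f}%")   # 50.0%
```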
Williams, John M. – American Education, 1984
The use of talking microprocessors and terminals in helping blind, visually impaired, speech impaired, and other handicapped people receive an education is described. The process of creating synthetic speech is examined, as well as how it helps in the classroom. The federal government's promotion and funding of synthetic speech research is also…
Descriptors: Artificial Speech, Blindness, Computer Assisted Instruction, Computers
Ainsworth, William A. – IEEE Transactions on Audio and Electroacoustics, 1973
Research supported by the Science Research Council. (DD)
Descriptors: Articulation (Speech), Artificial Speech, Computers, Data Analysis
Peer reviewed: Leong, Che Kan – Learning Disability Quarterly, 1995
This study investigated the role of online reading and simultaneous DECtalk (a text-to-speech computer system) auding in helping 192 above-average and below-average readers comprehend expository prose. Results showed significant differences among grades, reading levels, and modes of responses to the reading passages, but not for the experimental…
Descriptors: Artificial Speech, Computer Assisted Instruction, Elementary Education, Expository Writing
Peer reviewed: Steffens, Michele L.; And Others – Journal of Speech and Hearing Research, 1992
This study examined the abilities of 18 adults with familial dyslexia to use steady state, dynamic, and temporal cues in synthetic speech continua. Although subjects were able to label and discriminate the continua, they did not necessarily use acoustic cues in the same manner as did normal readers, and their overall performance was less accurate.…
Descriptors: Acoustic Phonetics, Adults, Artificial Speech, Auditory Discrimination
