Speech sound production is one of the most complex human activities; it is also one of the least well understood. This is perhaps not altogether surprising, as many of the complex neurological and physiological processes involved in the generation and execution of a speech utterance remain relatively inaccessible to direct investigation, and must be inferred from careful scrutiny of the output of the system, from details of the movements of the speech organs themselves and the acoustic consequences of such movements. Such investigation of the speech output has received considerable impetus during the last decade from major technological advances in computer science and biological transducing, which now make it possible to obtain large quantities of quantitative data on many aspects of speech articulation and acoustics relatively easily. Keeping pace with these advances in laboratory techniques have been developments in theoretical modelling of the speech production process. A wide variety of models is now available, reflecting the different disciplines involved: linguistics, speech science and technology, engineering and acoustics. The time seems ripe to attempt a synthesis of these different models and theories and thus provide a common forum for discussion of the complex problem of speech production. Such an undertaking would seem particularly timely for colleagues in speech technology seeking better, more accurate phonetic models as components of their speech synthesis and automatic speech recognition systems.