Although there are several models of learning that can accommodate second-order non-local phonological processes (Hayes and Wilson, 2008; Heinz, 2010), none of them was specifically designed to explain the implicational universal that a second-order non-local pattern entails the existence of a first-order pattern. However, it is possible that such models may "abduce" the implication (Pearl, 1988). Abduction refers to methods for finding a single cause of an event on the basis of independent evidence. In the case of consonant harmony, a model must determine the source of the harmony pattern: a long-distance pattern that applies at both first and second order, a pattern that is first-order only, or a pattern that is second-order only. If the representations encoded in the model allow only first-order and first-and-second-order harmony patterns (but not second-order-only patterns), then a model exposed to a second-order pattern is forced to interpret the data as a first-and-second-order pattern. If, however, the model is exposed to first-order data, it can treat the data either as a first-order pattern or as a first-and-second-order pattern. The model can choose between these two hypotheses on the basis of additional information, such as a preference for the simpler, more conservative pattern, or a bias toward interpolation over extrapolation. Both heuristics would lead the model to posit a first-order pattern. A heuristic favoring maximally general patterns, however, could lead the model to the first-and-second-order pattern. If stochastic mechanisms decide which heuristic to apply, such that the bias toward maximally general patterns has low probability, extrapolation from first-order data to a first-and-second-order pattern would occur with low frequency, so that only a small proportion of learners would derive the maximally general pattern.
Implementing Optimality-Theoretic learning models in a stochastic framework could shed light on how Optimality-Theoretic analyses of harmony, such as that of Rose and Walker (2004), can accommodate these findings.
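The hypothesis-selection scenario just described can be made concrete with a small simulation. The following is a minimal illustrative sketch, not the authors' model: the function names, the specific probability value, and the two-hypothesis setup are assumptions introduced here to show how a low-probability "generality" heuristic yields a small proportion of learners who extrapolate to the maximally general pattern.

```python
import random

# Hypothetical sketch of the scenario above: a learner exposed to
# first-order harmony data must choose between a first-order-only
# pattern and a maximally general first-and-second-order pattern.
# Both hypotheses fit the training data; a stochastic bias decides.

def choose_hypothesis(p_general=0.1, rng=random):
    """Pick a hypothesis compatible with first-order training data.

    p_general is the (assumed, low) probability of the heuristic that
    favors the maximally general pattern over the more conservative one.
    """
    if rng.random() < p_general:
        return "first-and-second-order"  # extrapolation: maximally general
    return "first-order-only"            # interpolation: conservative fit

random.seed(0)
choices = [choose_hypothesis() for _ in range(1000)]
share_general = choices.count("first-and-second-order") / len(choices)
print(f"proportion inferring the general pattern: {share_general:.2f}")
```

With a low `p_general`, most simulated learners settle on the first-order-only pattern, mirroring the prediction that only a minority of learners derive the very general pattern.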

The present study tests the privileged status of first-order non-local interactions in consonant harmony with two artificial grammar learning experiments with adults. In the artificial grammar learning paradigm, adult speakers are exposed to a sample of a miniature, experimenter-designed language that conforms to one or more linguistic patterns (in this case, consonant harmony). After exposure, participants are tested on words in the language to assess whether they have inferred the pattern on which they were trained. It is also possible to test participants on novel items that require generalization beyond the initial training set, which allows the experimenter to probe the nature of the representations learners derive from the exposure materials. The artificial grammar learning paradigm is an ideal method for testing the role of locality in long-distance consonant harmony processes. Unlike natural language learning contexts, it makes it possible to compare two minimally different types of languages (first-order and second-order non-locality). It also makes it possible to test inferences on novel materials in ways that might not be possible in a natural setting. Because the study of linguistic universals is fraught with confounds, support for a universal among adult learners provides strong evidence for previously posited universal principles (Nevins, 2009). Previous artificial grammar learning research on vowel harmony (Finley & Badecker, 2008, 2009a, 2009b, in press; Pycha, Nowak, Shin, & Shosted, 2003) and consonant harmony (Wilson, 2003) has shown that adult learners can acquire phonological agreement patterns after relatively brief training. In these experiments, learners were exposed to pseudo-morphophonological patterns in which allomorph selection depended on a harmonic feature of the stem.

Participants learned harmony patterns that were natural, but not patterns that were unnatural. Natural patterns are both non-arbitrary (phonetically grounded) and consistent with cross-linguistic tendencies. For example, Finley and Badecker (2009a) showed that learners can acquire a back/round vowel harmony pattern and extend it to vowels that did not appear in the training set. However, this generalization occurred only when the novel vowels had the features required to trigger round harmony. These results suggest that learners form rules over natural features and classes, but are sensitive to the representations required to participate in phonological processes. In many ways, consonant harmony appears to be an exception to the generalization that phonological processes are local. Consonant harmony can be more non-local than vowel harmony (i.e., subject to non-adjacent dependencies spanning a large number of syllables). The present experiments show that first-order non-local instances of consonant harmony appear to be privileged over second-order non-local interactions. In an artificial grammar learning setting, learning a second-order non-local consonant harmony pattern entails a first-order consonant harmony pattern, whereas learning a first-order consonant harmony pattern does not entail a second-order harmony pattern. To address the differences in coarticulation between vowels and consonants, Gafos (1998) suggests that the apparent violations of strict locality in consonant harmony arise because such cases are not "true" consonant harmony, but rather a feature-copying process (Gafos, 1998; Nevins, 2010; Rose and Walker, 2004).
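The first-order/second-order distinction can be sketched concretely. The following minimal sketch assumes CV syllables and a sibilant dependency between syllable onsets; the toy syllables and the two-sibilant inventory are illustrative assumptions, not the authors' stimuli.

```python
# Illustrative sketch (not the experimental materials): classify the
# order of a long-distance sibilant dependency by how many syllables
# separate trigger and target, and check whether the sibilants agree.

SIBILANTS = {"s", "ʃ"}

def sibilant_positions(syllables):
    """Indices of syllables whose onset is a sibilant."""
    return [i for i, syl in enumerate(syllables) if syl[0] in SIBILANTS]

def harmony_order(syllables):
    """1 = sibilants in adjacent syllables (first-order);
    2 = one full syllable intervenes (second-order);
    None = no dependency (fewer than two sibilants)."""
    pos = sibilant_positions(syllables)
    if len(pos) < 2:
        return None
    return pos[1] - pos[0]

def is_harmonic(syllables):
    """True if all sibilant onsets agree (e.g. all /s/ or all /ʃ/)."""
    onsets = {syllables[i][0] for i in sibilant_positions(syllables)}
    return len(onsets) <= 1

print(harmony_order(["so", "ʃu"]))        # first-order: prints 1
print(harmony_order(["so", "ko", "ʃu"]))  # second-order: prints 2
print(is_harmonic(["so", "ko", "su"]))    # sibilants agree: prints True
```

On this sketch, a learner trained only on words like ["so", "ʃu"] (order 1) has no direct evidence about order-2 words, which is exactly the generalization question the experiments probe.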

Although Gafos argues that such feature-copying cases are not technically "harmony", we follow Hansson (2001) and assume, for the purposes of this article, that all cases of consonant agreement (local or long-distance) should be classified as consonant harmony. The strictly local approach to consonant harmony is grounded in theories of phonetic representation. Keating (1988) argues that phonetic underspecification determines whether a process applies locally or at a distance. Phonetic underspecification occurs when a sound is not marked (either in the phonology or in the lexicon) for a particular phonetic implementation. If a sound is phonetically underspecified, the speaker can simply take over the articulatory gesture of an adjacent sound (a process called phonetic interpolation), pronouncing the value of another segment. This gives rise to coarticulation and spreading effects. When a series of underspecified segments occurs, the segments take on the value of the coarticulated feature over some distance, thus producing non-local spreading. This proposal can be adapted to autosegmental accounts of consonant harmony.

For example, if all sibilant consonants are given a separate representational tier, segments specified as sibilants can interact with one another as if they were adjacent, even when non-sibilant segments intervene. When the sibilant feature is assigned its own tier, segments that lack a sibilant specification (for example, voiceless stops such as /t/) have no representation on the sibilant tier. …
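The tier projection just described can be sketched in a few lines. This is a minimal illustration under assumed names and a small segment inventory, not a standard phonological library: projecting only sibilant-specified segments onto their own tier makes them tier-adjacent, while /t/, vowels, and other unspecified segments simply drop out.

```python
# Hypothetical sketch of an autosegmental sibilant tier: only segments
# specified for the sibilant feature are represented on the tier, so
# they become adjacent there even when other segments intervene.

SIBILANTS = {"s", "z", "ʃ", "ʒ"}

def sibilant_tier(segments):
    """Project the sibilant tier: /t/, vowels, etc. have no
    representation on this tier and are skipped."""
    return [seg for seg in segments if seg in SIBILANTS]

def harmonic_on_tier(segments):
    """True if the tier-adjacent sibilants all agree."""
    return len(set(sibilant_tier(segments))) <= 1

word = list("satoʃi")  # /t/ and the vowels are invisible on the tier
print(sibilant_tier(word))  # → ['s', 'ʃ']: adjacent on the tier
print(harmonic_on_tier(list("satosi")))  # prints True
```

The point of the representation is that the agreement check is strictly local on the tier (neighboring tier elements), even though the interacting consonants are non-adjacent in the segmental string.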

© 2015 "El Renuevo" | Iglesia Cristiana Evangélica.
Seguinos en: