Publisher: eNeuro

ABSTRACT

Variations in human behavior correspond to the adaptation of the nervous system to different internal and environmental demands. Attention, a cognitive process for weighing environmental demands, changes over time. Pupillary activity, which is affected by fluctuating levels of cognitive processing, appears to track neural dynamics that relate to different states of attention. In mice, for example, pupil dynamics directly correlate with brain-state fluctuations. Although, in humans, alpha-band activity is associated with inhibitory processes in cortical networks during visual processing, and its amplitude is modulated by attention, conclusive evidence linking this narrowband activity to pupil changes over time remains sparse. We hypothesize that, because alpha activity and pupil diameter both index attentional variations over time, the two measures should be comodulated. In this work, we recorded the electroencephalographic (EEG) and pupillary activity of 16 human subjects who kept their eyes fixed on a gray screen for 1 min. Our study revealed that the alpha-band amplitude and the high-frequency component of the pupil diameter covary spontaneously. Specifically, the maximum alpha-band amplitude occurred ∼300 ms before the peak of the pupil diameter, whereas the minimum alpha-band amplitude occurred ∼350 ms before the trough of the pupil diameter. The consistent temporal coincidence of these two measurements strongly suggests that the subject’s state of attention, as indexed by the EEG alpha amplitude, changes moment to moment and can be monitored by measuring EEG together with pupil diameter.
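
A minimal sketch of the kind of analysis this abstract describes: cross-correlating the EEG alpha-band envelope with the high-frequency component of the pupil trace to find the lead/lag between them. The sampling rate, band edges, and 0.5 Hz cutoff below are illustrative assumptions, not the paper's exact pipeline, and the signals are random stand-ins.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, correlate

fs = 250.0                      # assumed common sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 1 min of fixation, as in the study
rng = np.random.default_rng(0)
eeg = rng.standard_normal(t.size)    # stand-in for an occipital EEG channel
pupil = rng.standard_normal(t.size)  # stand-in for the pupil diameter trace

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Alpha-band (8-12 Hz) amplitude envelope via the Hilbert transform
alpha_env = np.abs(hilbert(bandpass(eeg, 8.0, 12.0, fs)))

# "High-frequency" pupil component: high-pass above an assumed 0.5 Hz cutoff
b, a = butter(4, 0.5 / (fs / 2), btype="high")
pupil_hf = filtfilt(b, a, pupil)

# Normalized cross-correlation; a peak at a negative lag means the alpha
# envelope leads the pupil signal (the paper reports leads of ~300-350 ms)
a_z = (alpha_env - alpha_env.mean()) / alpha_env.std()
p_z = (pupil_hf - pupil_hf.mean()) / pupil_hf.std()
xcorr = correlate(a_z, p_z, mode="full", method="fft") / a_z.size
lags = np.arange(-(p_z.size - 1), a_z.size) / fs
print(f"peak at lag {lags[np.argmax(xcorr)] * 1000:.0f} ms")
```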


Publisher: Cognition

ABSTRACT

The importance of proportional reasoning has long been recognized by psychologists and educators, yet we still do not have a good understanding of how humans mentally represent proportions. In this paper we present a psychophysical model of proportion estimation, extending previous approaches. We assumed that proportion representations are formed by representing each magnitude of a proportion stimulus (the part and its complement) as Gaussian activations in the mind, which are then mentally combined in the form of a proportion. We next derived the internal representation of proportions, including bias and internal noise parameters, which capture, respectively, how estimations depart from true values and how variable they are. Methodologically, we introduced a mixture of components to account for contaminating behaviors (guessing and reversal of responses) and framed the model hierarchically. We found empirical support for the model by testing a group of 4th-grade children in a spatial proportion estimation task. In particular, the internal density reproduced the asymmetries (skewness) seen in this and in previous reports of estimation tasks, and the model accurately described wide between-subject variations in behavior. Bias estimates were in general smaller than those obtained with previous approaches, owing to the model's capacity to absorb contaminating behaviors. This property of the model can be of special relevance for studies aimed at linking psychophysical measures with broader cognitive abilities. We also recovered higher levels of noise than those reported for discrimination of spatial magnitudes and discuss possible explanations for this. We conclude by illustrating a concrete application of our model to study the effects of scaling in proportional reasoning, highlighting the value of quantitative models in this field of research.
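
A minimal generative sketch of the model class the abstract outlines: Gaussian activations for the part and its complement are combined into a proportion, and responses are contaminated by a mixture of guessing and reversals. The scalar noise scaling and the cyclic-power form of the bias are assumptions chosen for illustration, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_estimates(part, whole, noise=0.15, beta=0.9,
                       p_guess=0.05, p_reverse=0.03, n=10_000):
    """Simulate spatial proportion estimates for one part/whole stimulus."""
    comp = whole - part
    # Gaussian internal magnitudes for the part and its complement
    # (noise proportional to magnitude is an assumption)
    x = rng.normal(part, noise * part, n)
    y = rng.normal(comp, noise * comp, n)
    p = np.clip(x / (x + y), 1e-9, 1 - 1e-9)   # combined as a proportion
    # Assumed cyclic-power bias: beta < 1 overestimates small proportions
    est = p ** beta / (p ** beta + (1 - p) ** beta)
    # Mixture of contaminating behaviors
    u = rng.random(n)
    est = np.where(u < p_guess, rng.random(n), est)    # guessing
    est = np.where(u > 1 - p_reverse, 1 - est, est)    # response reversal
    return est

e = simulate_estimates(part=3, whole=10)
print(f"mean estimate of 3/10: {e.mean():.3f} (true 0.300)")
```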


Publisher: Frontiers in Neuroscience

ABSTRACT

Hippocampal-dependent memories emerge late during postnatal development, in line with hippocampal maturation. The two-stage memory formation model holds that during sleep, through hippocampal-neocortical interactions, cortical slow oscillations (SOs), thalamocortical spindles, and hippocampal sharp-wave ripples (SWRs) are synchronized, allowing the consolidation of hippocampal-dependent memories. However, evidence supporting this hypothesis during development is still lacking. We therefore performed successive object-in-place tests during a window of memory emergence and recorded in vivo the occurrence of SOs, spindles, and SWRs during sleep, immediately after the memory encoding stage of the task. We found that hippocampal-dependent memory emerges at the end of the 4th postnatal week, independently of task overtraining. Furthermore, animals with better performance in the memory task had higher spindle density and duration and lower SWR density. Moreover, we observed changes in SO-spindle and spindle-SWR temporal coupling during this developmental period. Our results provide new evidence for the onset of hippocampal-dependent memory and its relationship to the oscillatory phenomena occurring during sleep, helping us understand how memory consolidation models fit into the early stages of postnatal development.
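
A hedged sketch of how spindle density and duration are commonly quantified (sigma-band envelope thresholding with a duration criterion). The band edges, threshold, and duration limits below are typical defaults assumed for illustration, not the study's detector, and the recording is a random stand-in.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def spindle_events(signal, fs, lo=10.0, hi=16.0, thresh_sd=2.5,
                   min_dur=0.5, max_dur=3.0):
    """Return (start_s, end_s) pairs of putative spindles in a 1-D trace."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, signal)))     # sigma-band envelope
    above = env > env.mean() + thresh_sd * env.std()  # supra-threshold mask
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1) + 1
    ends = np.flatnonzero(d == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    # Keep events of plausible spindle duration only
    return [(s / fs, e / fs) for s, e in zip(starts, ends)
            if min_dur <= (e - s) / fs <= max_dur]

fs = 500.0
rec = np.random.default_rng(2).standard_normal(int(120 * fs))  # 2 min stand-in
events = spindle_events(rec, fs)
density = len(events) / 2.0          # events per minute
durations = [e - s for s, e in events]
print(f"density: {density:.1f}/min, "
      f"mean duration: {np.mean(durations) if durations else 0:.2f} s")
```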


Publisher: The European Journal of Neuroscience

ABSTRACT

It is widely accepted that the brain, like any other physical system, is subject to physical constraints that restrict its operation. The brain's metabolic demands are particularly critical for proper neuronal function, but the impact of these constraints remains poorly understood. Detailed single-neuron models have recently begun to integrate metabolic constraints, but their computational cost makes it challenging to explore the dynamics of extended neural networks governed by such constraints. There is thus a need for a simplified neuron model that incorporates metabolic activity and allows us to explore the dynamics of neural networks. This work introduces an energy-dependent leaky integrate-and-fire (EDLIF) neuron model that extends the classical formulation to account for the effects of metabolic constraints on single-neuron behavior. This simple, energy-dependent model can describe the relationship between the average firing rate and the adenosine triphosphate (ATP) cost, as well as replicate a neuron's behavior in a clinical condition such as amyotrophic lateral sclerosis (ALS). Additionally, the EDLIF model showed better performance in predicting real spike trains (in the sense of a spike-coincidence measure) than the classical leaky integrate-and-fire (LIF) model. The simplicity of the energy-dependent model presented here makes it computationally efficient and thus suitable for studying the dynamics of large neural networks.
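
A minimal sketch in the spirit of the EDLIF extension: a classical LIF whose effective drive is scaled by a finite ATP pool that each spike depletes and that recovers between spikes. The coupling form and all constants are illustrative assumptions, not the published equations.

```python
import numpy as np

dt = 0.1          # time step (ms)
T = 500.0         # simulated time (ms)
v_rest, v_th, v_reset = -65.0, -50.0, -65.0   # membrane potentials (mV)
tau_m = 10.0      # membrane time constant (ms)
atp, atp_max = 1.0, 1.0   # normalized ATP pool
tau_atp = 200.0   # ATP recovery time constant (ms), assumed
spike_cost = 0.1  # ATP consumed per spike, assumed

v = v_rest
spikes = []
for step in range(int(T / dt)):
    I = 20.0  # constant input drive (arbitrary units)
    # Membrane update; available ATP scales the effective drive (assumption)
    v += dt / tau_m * (-(v - v_rest) + I * (atp / atp_max))
    # ATP recovers toward its maximum between spikes
    atp += dt / tau_atp * (atp_max - atp)
    if v >= v_th:
        v = v_reset
        atp = max(atp - spike_cost, 0.0)  # each spike costs ATP
        spikes.append(step * dt)

rate = len(spikes) / (T / 1000.0)
print(f"firing rate: {rate:.1f} Hz, final ATP: {atp:.2f}")
```

Because spiking drains the pool faster than it recovers, sustained input yields a gradually adapting firing rate, which is the qualitative link between firing rate and ATP cost the model is meant to capture.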


Publisher: Scientific Reports

ABSTRACT

Before 6 months of age, infants succeed in learning words associated with objects and actions when the words are presented in isolation or embedded in short utterances. It remains unclear whether this type of learning occurs with fluent audiovisual stimuli, even though fluent audiovisual contexts are the default in natural environments. In four experiments, we evaluated whether 8-month-old infants could learn word-action and word-object associations from fluent audiovisual streams when the words conveyed either vowel or consonant harmony, two phonological cues that benefit word learning near 6 and 12 months of age, respectively. We found that infants learned both types of words, but only when the words contained vowel harmony. Because object- and action-words have been conceived as rudimentary representations of nouns and verbs, our results suggest that vowels contribute to shaping the initial steps of learning lexical categories in preverbal infants.


Publisher: Scientific Reports

ABSTRACT

In natural vision, neuronal responses to visual stimuli arise from self-initiated eye movements. Here, we compare single-unit activity in the primary visual cortex (V1) of non-human primates when natural scenes are flashed (passive vision) with activity when the animals freely explore the same images through self-initiated eye movements (active vision). Active vision increases the number of responsive neurons, and response latencies become shorter and less variable across neurons. The increased responsiveness and shortened latency during active vision were not explained by increased visual contrast. While neuronal activity in all layers of V1 shows enhanced responsiveness and shortened latency, a significant increase in lifetime sparseness during active vision is observed only in the supragranular layer. These findings demonstrate that neuronal responses become more distinct during active vision than during passive vision, which we interpret as a consequence of top-down predictive mechanisms.
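
The abstract does not define lifetime sparseness; a common choice in V1 studies is the Vinje-Gallant measure, sketched here under that assumption. It ranges from 0 (a neuron responds equally to all stimuli) to 1 (it responds to a single stimulus).

```python
import numpy as np

def lifetime_sparseness(rates):
    """Vinje-Gallant lifetime sparseness over a neuron's responses
    to n stimuli (or fixations): 0 = dense, 1 = maximally sparse."""
    r = np.asarray(rates, dtype=float)
    n = r.size
    return (1 - (r.mean() ** 2) / np.mean(r ** 2)) / (1 - 1 / n)

dense = np.full(50, 10.0)               # responds equally to every stimulus
sparse = np.zeros(50); sparse[0] = 10.0 # responds to one stimulus only
print(lifetime_sparseness(dense))   # -> 0.0
print(lifetime_sparseness(sparse))  # -> 1.0
```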


Publisher: PNAS

ABSTRACT

While there is increasing acceptance that even young infants detect correspondences between heard and seen speech, the common view is that oral-motor movements related to speech production cannot influence speech perception until infants begin to babble or speak. We investigated the extent of multimodal speech influences on auditory speech perception in prebabbling infants, who have limited speech-like oral-motor repertoires. We used event-related potentials (ERPs) to examine how sensorimotor information from the infant’s own articulatory movements impacts auditory speech perception in 3-mo-old infants. In experiment 1, there were ERP discriminative responses to phonetic category changes across two phonetic contrasts (bilabial–dental /ba/-/ɗa/; dental–retroflex /ɗa/-/ɖa/) in a mismatch paradigm, indicating that infants auditorily discriminated both contrasts. In experiment 2, inhibiting infants’ own tongue-tip movements had a disruptive influence on the early ERP discriminative response to the /ɗa/-/ɖa/ contrast only. The same articulatory inhibition had contrasting effects on the perception of the /ba/-/ɗa/ contrast, whose phones require different articulators (the lips vs. the tongue) during production, and the /ɗa/-/ɖa/ contrast, whose phones both require tongue-tip movement and differ only in place of articulation. This articulatory distinction between the two contrasts plausibly accounts for the distinct influence of tongue-tip suppression on the neural responses to phonetic category change in definitively prebabbling 3-mo-old infants. The specificity of the relation between oral-motor inhibition and phonetic speech discrimination suggests a surprisingly early mapping between auditory and motor speech representations, already present in prebabbling infants.


Publisher: Springer Series in Computational Neuroscience

ABSTRACT

In Chap. 11, Pedro Maldonado describes his joint work with his PhD advisor, George Gerstein, demonstrating plasticity in receptive field properties, neuronal interactions, and network dynamics in the rat auditory cortex upon electrical intracortical microstimulation.


Publisher: Cognition

ABSTRACT

One of the prominent ideas developed by Jacques Mehler and his colleagues was that perceptual tuning, present from birth, enables infants, and language learners in general, to extract regularities from speech input. Here we discuss language learners' ability to extract the basic word order (VO or OV) of a language from its prosodic regularities. The two are closely related: in phonological phrases of VO languages, the most prominent word is the rightmost one, while in OV languages it is the leftmost one. In speech, this prominence is realized as extended duration or elevated pitch, sometimes combined with changes in intensity. When learning a first (L1) or second (L2) language, exposure to the relevant rhythmic structure elicits implicit learning about syntactic structure, including the basic word order. However, it remains unclear whether triggering the learning process requires a certain level of familiarity with the relevant rhythm. It is moreover unknown whether prosodic information can help L2 learners extract and learn the vocabulary of a new language. We tested Spanish- and Italian-speaking adults' ability to learn words from an artificial language with either a non-native OV or a native VO word order. The results show that learners used prosodic information to identify the most prominent words in short utterances when the artificial language was similar to the native language, with duration-based prominence and a VO word order. In contrast, when the artificial language had non-native prominence marked by pitch alternations and an OV word order, prominent words were learned only after a three-day exposure to the relevant rhythmic structure. Thus, for adult L2 learners, only repeated exposure to the relevant prosody elicited the learning of new words from an unknown language with non-native prosodic marking, indicating that, with familiarity, prosodic cues can facilitate learning in L2.


Publisher: Frontiers in Systems Neuroscience

ABSTRACT

Explaining the emergence of behavior and understanding from their neural mechanisms remains elusive. One renowned proposal is the Free Energy Principle (FEP), which uses an information-theoretic framework derived from thermodynamic considerations to describe how behavior and understanding emerge. The FEP starts from a whole-organism approach, based on mental states and phenomena, and maps them onto the neuronal substrate. An alternative approach, the Energy Homeostasis Principle (EHP), undertakes a similar explanatory effort but starts from single-neuron phenomena and builds up to whole-organism behavior and understanding. In this work, we further develop the EHP as a distinct but complementary vision to the FEP and try to explain how behavior and understanding would emerge from the local requirements of neurons. Based on the EHP and a strictly naturalist approach that views living beings as physical, deterministic systems, we describe scenarios in which learning would emerge without the need for volition or goals. Given these starting points, we lay out several considerations on how we view the nervous system, particularly the roles of function, purpose, and the conception of goal-oriented behavior. We problematize these conceptions, offering an alternative, teleology-free framework in which behavior and, ultimately, understanding would still emerge. We reinterpret neural processing by explaining basic learning scenarios up to simple anticipatory behavior. Finally, we end the article with an evolutionary perspective on how this non-goal-oriented behavior appeared. We acknowledge that our proposal, in its current form, is still far from explaining the emergence of understanding. Nonetheless, we set the ground for an alternative, neuron-based framework to ultimately explain understanding.

