The goal of this PhD thesis is to define a joint computational model of two complementary cognitive functions: 1) the perception and learning of an artificial language, and 2) the impact of social stimuli on these processes, conveyed through the facial expressions of virtual agents. A model will be defined within Smolensky's framework. Several measures of computational complexity will be developed to study these cognitive functions and their interactions. These measures will then be used to assess the model's accuracy against data from neuroimaging and behavioral studies. This project will not only advance current knowledge of the cognitive functions under consideration and their interaction, but will also set a precedent for more ambitious integrative models of cognition and emotion. It will further support efforts in neuroscience and social computing to conduct studies in more realistic human-human and human-machine interaction contexts.