Multimodal Collaboration Analytics (MMCA): A Literature Review
- Ainley, M., Corrigan, M., & Richardson, N. (2005). Students, tasks and emotions: Identifying the contribution of emotions to students’ reading of popular culture and popular science texts. Learning and Instruction, 15(5), 433–447.
- Andrist, S., Ruis, A. R., & Shaffer, D. W. (2018). A network analytic approach to gaze coordination during a collaborative task. Computers in Human Behavior, 89, 339–348. https://doi.org/10.1016/j.chb.2018.07.017
- Bachour, K., Kaplan, F., & Dillenbourg, P. (2010). An Interactive Table for Supporting Participation Balance in Face-to-Face Collaborative Learning. IEEE Transactions on Learning Technologies, 3(3), 203–213. https://doi.org/10.1109/TLT.2010.18
- Barry, B., & Stewart, G. L. (1997). Composition, process, and performance in self-managed groups: The role of personality. Journal of Applied Psychology, 82(1), 62.
- Bassiou, N., Tsiartas, A., Smith, J., Bratt, H., Richey, C., Shriberg, E., D’Angelo, C., & Alozie, N. (2016). Privacy-Preserving Speech Analytics for Automatic Assessment of Student Collaboration. INTERSPEECH, 888–892.
- Behoora, I., & Tucker, C. S. (2015). Machine learning classification of design team members’ body language patterns for real time emotional state detection. Design Studies, 39, 100–127. https://doi.org/10.1016/j.destud.2015.04.003
- Beyan, C., Carissimi, N., Capozzi, F., Vascon, S., Bustreo, M., Pierro, A., Becchio, C., & Murino, V. (2016). Detecting emergent leader in a meeting environment using nonverbal visual features only. Proceedings of the 18th ACM International Conference on Multimodal Interaction - ICMI 2016, 317–324. https://doi.org/10.1145/2993148.2993175
- Beyan, C., Katsageorgiou, V.-M., & Murino, V. (2017). Moving as a Leader: Detecting Emergent Leadership in Small Groups using Body Pose. Proceedings of the 2017 ACM on Multimedia Conference - MM ’17, 1425–1433. https://doi.org/10.1145/3123266.3123404
- Bhattacharya, I., Foley, M., Zhang, N., Zhang, T., Ku, C., Mine, C., Ji, H., Riedl, C., Welles, B. F., & Radke, R. J. (2018). A multimodal-sensor-enabled room for unobtrusive group meeting analysis. Proceedings of the 20th ACM International Conference on Multimodal Interaction, 347–355.
- Chikersal, P., Tomprou, M., Kim, Y. J., Woolley, A. W., & Dabbish, L. (2017). Deep Structures of Collaboration: Physiological Correlates of Collective Intelligence and Group Satisfaction. Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing, 873–888. https://doi.org/10.1145/2998181.2998250
- Chng, E., Seyam, M. R., Yao, W., & Schneider, B. (2020). Using Motion Sensors to Understand Collaborative Interactions in Digital Fabrication Labs. In I. I. Bittencourt, M. Cukurova, K. Muldner, R. Luckin, & E. Millán (Eds.), Artificial Intelligence in Education (Vol. 12163, pp. 118–128). Springer International Publishing. https://doi.org/10.1007/978-3-030-52237-7_10
- Cukurova, M., Luckin, R., Millán, E., & Mavrikis, M. (2018). The NISPI framework: Analysing collaborative problem-solving from students’ physical interactions. Computers & Education, 116, 93–109.
- Dale, R., Bryant, G. A., Manson, J. H., & Gervais, M. M. (2020). Body synchrony in triadic interaction. Royal Society Open Science, 7(9), 200095. https://doi.org/10.1098/rsos.200095
- Damon, W., & Phelps, E. (1989). Critical distinctions among three approaches to peer education. International Journal of Educational Research, 13(1), 9–19.
- D’Angelo, S., & Begel, A. (2017). Improving communication between pair programmers using shared gaze awareness. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, 6245–6255.
- D’Angelo, S., & Gergle, D. (2016). Gazed and confused: Understanding and designing shared gaze for remote collaboration. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 2492–2496.
- D’Angelo, S., & Gergle, D. (2018). An eye for design: Gaze visualizations for remote collaborative work. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–12.
- Dich, Y., Reilly, J., & Schneider, B. (2018). Using physiological synchrony as an indicator of collaboration quality, task performance and learning. International Conference on Artificial Intelligence in Education, 98–110.
- Dikker, S., Wan, L., Davidesco, I., Kaggen, L., Oostrik, M., McClintock, J., Rowland, J., Michalareas, G., Van Bavel, J. J., Ding, M., & Poeppel, D. (2017). Brain-to-Brain Synchrony Tracks Real-World Dynamic Group Interactions in the Classroom. Current Biology, 27(9), 1375–1380. https://doi.org/10.1016/j.cub.2017.04.002
- Dindar, M., Järvelä, S., & Haataja, E. (2020). What does physiological synchrony reveal about metacognitive experiences and group performance? British Journal of Educational Technology, 51(5), 1577–1596. https://doi.org/10.1111/bjet.12981
- Efklides, A., Papadaki, M., Papantoniou, G., & Kiosseoglou, G. (1998). Individual differences in feelings of difficulty: The case of school mathematics. European Journal of Psychology of Education, 13(2), 207–226.
- Eloy, L., Stewart, A. E. B., Amon, M. J., Reinhardt, C., Michaels, A., Chen, S., Shute, V., Duran, N. D., & D’Mello, S. K. (2019). Modeling Team-level Multimodal Dynamics during Multiparty Collaboration. 2019 International Conference on Multimodal Interaction, 244–258. https://doi.org/10.1145/3340555.3353748
- Erkens, G., Jaspers, J., Prangsma, M., & Kanselaar, G. (2005). Coordination processes in computer supported collaborative writing. Computers in Human Behavior, 21(3), 463–486.
- Evans, A. C., Wobbrock, J. O., & Davis, K. (2016). Modeling Collaboration Patterns on an Interactive Tabletop in a Classroom Setting. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing - CSCW ’16, 858–869. https://doi.org/10.1145/2818048.2819972
- Fang, S., Achard, C., & Dubuisson, S. (2016). Personality classification and behaviour interpretation: An approach based on feature categories. Proceedings of the 18th ACM International Conference on Multimodal Interaction - ICMI 2016, 225–232. https://doi.org/10.1145/2993148.2993201
- Gergle, D., & Clark, A. T. (2011). See what I’m saying? Using dyadic mobile eye tracking to study collaborative reference. Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, 435–444.
- Grafsgaard, J. F., Wiggins, J. B., Vail, A. K., Boyer, K. E., Wiebe, E. N., & Lester, J. C. (2014). The additive value of multimodal features for predicting engagement, frustration, and learning during tutoring. Proceedings of the 16th International Conference on Multimodal Interaction, 42–49.
- Hadwin, A. F., & Webster, E. A. (2013). Calibration in goal setting: Examining the nature of judgments of confidence. Learning and Instruction, 24, 37–47.
- Higuchi, K., Yonetani, R., & Sato, Y. (2016). Can Eye Help You?: Effects of Visualizing Eye Fixations on Remote Collaboration Scenarios for Physical Tasks. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5180–5190. https://doi.org/10.1145/2858036.2858438
- Hung, H., Jayagopi, D. B., Ba, S., Odobez, J.-M., & Gatica-Perez, D. (2008). Investigating automatic dominance estimation in groups from visual attention and speaking activity. Proceedings of the 10th International Conference on Multimodal Interfaces, 233–236. https://doi.org/10.1145/1452392.1452441
- Järvelä, S., Kivikangas, J. M., Kätsyri, J., & Ravaja, N. (2014). Physiological Linkage of Dyadic Gaming Experience. Simulation & Gaming, 45(1), 24–40. https://doi.org/10.1177/1046878113513080
- Jayagopi, D., Sanchez-Cortes, D., Otsuka, K., Yamato, J., & Gatica-Perez, D. (2012). Linking speaking and looking behavior patterns with group composition, perception, and performance. Proceedings of the 14th ACM International Conference on Multimodal Interaction, 433–440.
- Jermann, P., & Nüssli, M.-A. (2012). Effects of sharing text selections on gaze cross-recurrence and interaction quality in a pair programming task. Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, 1125–1134. https://doi.org/10.1145/2145204.2145371
- Keyton, J., & Wall, V. D. (1989). SYMLOG: Theory and method for measuring group and organizational communication. Management Communication Quarterly, 2(4), 544–567.
- Kantharaju, R. B., Ringeval, F., & Besacier, L. (2018). Automatic Recognition of Affective Laughter in Spontaneous Dyadic Interactions from Audiovisual Signals. Proceedings of the 2018 on International Conference on Multimodal Interaction - ICMI ’18, 220–228. https://doi.org/10.1145/3242969.3243012
- Kelly, J. R., & Barsade, S. (2001). Mood and emotions in small groups and work teams. Organizational Behavior and Human Decision Processes, 86(1), 99–130.
- Kütt, G. H., Tanprasert, T., Rodolitz, J., Moyza, B., So, S., Kenderova, G., & Papoutsaki, A. (2020). Effects of Shared Gaze on Audio- Versus Text-Based Remote Collaborations. 4, 25. https://doi.org/10.1145/3415207
- Le Dantec, C. A., & Do, E. Y.-L. (2009). The mechanisms of value transfer in design meetings. Design Studies, 30(2), 119–137.
- Lepri, B., Subramanian, R., Kalimeri, K., Staiano, J., Pianesi, F., & Sebe, N. (2010). Employing social gaze and speaking activity for automatic determination of the extraversion trait. International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, 1–8.
- Li, W., Nüssli, M.-A., & Jermann, P. (2010). Gaze quality assisted automatic recognition of social contexts in collaborative Tetris. International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction, 1–8.
- Lin, Y.-S., & Lee, C.-C. (2018). Using Interlocutor-Modulated Attention BLSTM to Predict Personality Traits in Small Group Interaction. Proceedings of the 2018 on International Conference on Multimodal Interaction - ICMI ’18, 163–169. https://doi.org/10.1145/3242969.3243001
- Liu, Y., Wang, T., Wang, K., & Zhang, Y. (2020). Predicting Collaborative Learning Quality through Physiological Synchrony Recorded by Wearable Biosensors. bioRxiv, 2020.06.01.127449. https://doi.org/10.1101/2020.06.01.127449
- Lord, R. G., Foti, R. J., & De Vader, C. L. (1984). A test of leadership categorization theory: Internal structure, information processing, and leadership perceptions. Organizational Behavior and Human Performance, 34(3), 343–378.
- Lubold, N., & Pon-Barry, H. (2014). Acoustic-Prosodic Entrainment and Rapport in Collaborative Learning Dialogues. Proceedings of the 2014 ACM Workshop on Multimodal Learning Analytics Workshop and Grand Challenge - MLA ’14, 5–12. https://doi.org/10.1145/2666633.2666635
- Luz, S. (2013). Automatic identification of experts and performance prediction in the multimodal math data corpus through analysis of speech interaction. Proceedings of the 15th ACM on International Conference on Multimodal Interaction - ICMI ’13, 575–582. https://doi.org/10.1145/2522848.2533788
- Malmberg, J., Haataja, E., Seppänen, T., & Järvelä, S. (2019). Are we together or not? The temporal interplay of monitoring, physiological arousal and physiological synchrony during a collaborative exam. International Journal of Computer-Supported Collaborative Learning, 14(4), 467–490. https://doi.org/10.1007/s11412-019-09311-4
- Malmberg, J., Järvelä, S., Holappa, J., Haataja, E., Huang, X., & Siipo, A. (2019). Going beyond what is visible: What multichannel data can reveal about interaction in the context of collaborative learning? Computers in Human Behavior, 96, 235–245.
- Manson, J. H., Bryant, G. A., Gervais, M. M., & Kline, M. A. (2013). Convergence of speech rate in conversation predicts cooperation. Evolution and Human Behavior, 34(6), 419–426.
- Martinez, R., Wallace, J. R., Kay, J., & Yacef, K. (2011). Modelling and Identifying Collaborative Situations in a Collocated Multi-display Groupware Setting. In G. Biswas, S. Bull, J. Kay, & A. Mitrovic (Eds.), Artificial Intelligence in Education (Vol. 6738, pp. 196–204). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-21869-9_27
- Martinez, R., Yacef, K., & Kay, J. (2011). Analysing frequent sequential patterns of collaborative learning activity around an interactive tabletop.
- Martinez-Maldonado, R., Dimitriadis, Y., Martinez-Monés, A., Kay, J., & Yacef, K. (2013). Capturing and analyzing verbal and physical collaborative learning interactions at an enriched interactive tabletop. International Journal of Computer-Supported Collaborative Learning, 8(4), 455–485. https://doi.org/10.1007/s11412-013-9184-1
- Martinez-Maldonado, R., Kay, J., & Yacef, K. (2013). An Automatic Approach for Mining Patterns of Collaboration around an Interactive Tabletop. In H. C. Lane, K. Yacef, J. Mostow, & P. Pavlik (Eds.), Artificial Intelligence in Education (Vol. 7926, pp. 101–110). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-39112-5_11
- McDuff, D., Thomas, P., Czerwinski, M., & Craswell, N. (2017). Multimodal analysis of vocal collaborative search: A public corpus and results. Proceedings of the 19th ACM International Conference on Multimodal Interaction, 456–463.
- Miura, G., & Okada, S. (2019). Task-independent Multimodal Prediction of Group Performance Based on Product Dimensions. 2019 International Conference on Multimodal Interaction, 264–273.
- Mønster, D., Håkonsson, D. D., Eskildsen, J. K., & Wallot, S. (2016). Physiological evidence of interpersonal dynamics in a cooperative production task. Physiology & Behavior, 156, 24–34. https://doi.org/10.1016/j.physbeh.2016.01.004
- Montague, E., Xu, J., & Chiou, E. (2014). Shared Experiences of Technology and Trust: An Experimental Study of Physiological Compliance Between Active and Passive Users in Technology-Mediated Collaborative Encounters. IEEE Transactions on Human-Machine Systems, 44(5), 614–624. https://doi.org/10.1109/THMS.2014.2325859
- Müller, P., & Bulling, A. (2019). Emergent Leadership Detection Across Datasets. 2019 International Conference on Multimodal Interaction, 274–278.
- Murray, G., & Oertel, C. (2018). Predicting Group Performance in Task-Based Interaction. Proceedings of the 20th ACM International Conference on Multimodal Interaction, 14–20. https://doi.org/10.1145/3242969.3243027
- Nakano, Y. I., Nihonyanagi, S., Takase, Y., Hayashi, Y., & Okada, S. (2015). Predicting participation styles using co-occurrence patterns of nonverbal behaviors in collaborative learning. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 91–98.
- Ochoa, X., Chiluiza, K., Méndez, G., Luzardo, G., Guamán, B., & Castells, J. (2013). Expertise estimation based on simple multimodal features. Proceedings of the 15th ACM on International Conference on Multimodal Interaction - ICMI ’13, 583–590. https://doi.org/10.1145/2522848.2533789
- Ogan, A., Finkelstein, S., Walker, E., Carlson, R., & Cassell, J. (2012). Rudeness and rapport: Insults and learning gains in peer tutoring. International Conference on Intelligent Tutoring Systems, 11–21.
- Okada, S., Aran, O., & Gatica-Perez, D. (2015). Personality Trait Classification via Co-Occurrent Multiparty Multimodal Event Discovery. Proceedings of the 2015 ACM on International Conference on Multimodal Interaction - ICMI ’15, 15–22. https://doi.org/10.1145/2818346.2820757
- Okada, S., Ohtake, Y., Nakano, Y. I., Hayashi, Y., Huang, H.-H., Takase, Y., & Nitta, K. (2016). Estimating communication skills using dialogue acts and nonverbal features in multiple discussion datasets. Proceedings of the 18th ACM International Conference on Multimodal Interaction, 169–176.
- Olsen, J., Sharma, K., Aleven, V., & Rummel, N. (2018). Combining gaze, dialogue, and action from a collaborative intelligent tutoring system to inform student learning processes. ICLS 2018 Proceedings.
- Paas, F. G. (1992). Training strategies for attaining transfer of problem-solving skill in statistics: A cognitive-load approach. Journal of Educational Psychology, 84(4), 429.
- Pijeira-Díaz, H. J., Drachsler, H., Järvelä, S., & Kirschner, P. A. (2016). Investigating collaborative learning success with physiological coupling indices based on electrodermal activity. Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, 64–73.
- Pijeira-Díaz, H. J., Drachsler, H., Järvelä, S., & Kirschner, P. A. (2019). Sympathetic arousal commonalities and arousal contagion during collaborative learning: How attuned are triad members? Computers in Human Behavior, 92, 188–197. https://doi.org/10.1016/j.chb.2018.11.008
- Pintrich, P. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ).
- Ponce-López, V., Escalera, S., & Baró, X. (2013). Multi-modal social signal analysis for predicting agreement in conversation settings. Proceedings of the 15th ACM on International Conference on Multimodal Interaction, 495–502. https://doi.org/10.1145/2522848.2532594
- Reilly, J. M., Ravenell, M., & Schneider, B. (2018). Exploring Collaboration Using Motion Sensors and Multi-Modal Learning Analytics. International Educational Data Mining Society.
- Reilly, J. M., & Schneider, B. (2019). Predicting the Quality of Collaborative Problem Solving Through Linguistic Analysis of Discourse.
- Riquelme, F., Munoz, R., Mac Lean, R., Villarroel, R., Barcelos, T. S., & de Albuquerque, V. H. C. (2019). Using multimodal learning analytics to study collaboration on discussion groups: A social network approach. Universal Access in the Information Society, 18(3), 633–643. https://doi.org/10.1007/s10209-019-00683-w
- Samrose, S., Zhao, R., White, J., Li, V., Nova, L., Lu, Y., Ali, M. R., & Hoque, M. E. (2018). Coco: Collaboration coach for understanding team dynamics during video conferencing. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(4), 1–24.
- Sanchez-Cortes, D., Aran, O., Jayagopi, D. B., Mast, M. S., & Gatica-Perez, D. (2013). Emergent leaders through looking and speaking: From audio-visual data to multimodal recognition. Journal on Multimodal User Interfaces, 7(1), 39–53.
- Scherer, S., Weibel, N., Morency, L.-P., & Oviatt, S. (2012). Multimodal prediction of expertise and leadership in learning groups. Proceedings of the 1st International Workshop on Multimodal Learning Analytics, 1–8.
- Schlösser, C., Schlieker-Steens, P., Kienle, A., & Harrer, A. (2015). Using Real-Time Gaze Based Awareness Methods to Enhance Collaboration. In N. Baloian, Y. Zorian, P. Taslakian, & S. Shoukouryan (Eds.), Collaboration and Technology (pp. 19–27). Springer International Publishing. https://doi.org/10.1007/978-3-319-22747-4_2
- Schneider, B. (2019). Unpacking collaborative learning processes during hands-on activities using mobile eye-trackers. International Conference on Computer Supported Collaborative Learning.
- Schneider, B., & Blikstein, P. (2015). Unraveling Students’ Interaction Around a Tangible Interface Using Multimodal Learning Analytics. Journal of Educational Data Mining, 7(3).
- Schneider, B., Dich, Y., & Radu, I. (2020). Unpacking the Relationship between Existing and New Measures of Physiological Synchrony and Collaborative Learning: A Mixed Methods Study. International Journal of Computer-Supported Collaborative Learning, 15(1), 89–113. https://doi.org/10.1007/s11412-020-09318-2
- Schneider, B., & Pea, R. (2013a). Real-time mutual gaze perception enhances collaborative learning and collaboration quality. International Journal of Computer-Supported Collaborative Learning, 8(4), 375–397.
- Schneider, B., & Pea, R. (2013b). Using Eye-Tracking Technology to Support Visual Coordination in Collaborative Problem-Solving Groups. CSCL 2013 Proceedings, 1, 8.
- Schneider, B., & Pea, R. (2014a). Toward collaboration sensing. International Journal of Computer-Supported Collaborative Learning, 9(4), 371–395. https://doi.org/10.1007/s11412-014-9202-y
- Schneider, B., & Pea, R. (2014b). The Effect of Mutual Gaze Perception on Students’ Verbal Coordination.
- Schneider, B., & Pea, R. (2015). Does Seeing One Another’s Gaze Affect Group Dialogue? A Computational Approach. Journal of Learning Analytics, 2(2), 107–133. https://doi.org/10.18608/jla.2015.22.9
- Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2016a). Detecting Collaborative Dynamics Using Mobile Eye-Trackers. International Society of the Learning Sciences.
- Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2016b). Using Mobile Eye-Trackers to Unpack the Perceptual Benefits of a Tangible User Interface for Collaborative Learning. ACM Transactions on Computer-Human Interaction, 23(6), 39:1-39:23. https://doi.org/10.1145/3012009
- Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. (2018). Leveraging mobile eye-trackers to capture joint visual attention in co-located collaborative learning groups. International Journal of Computer-Supported Collaborative Learning, 13(3), 241–261.
- Schneider, B., Sharma, K., Cuendet, S., Zufferey, G., Dillenbourg, P., & Pea, R. D. (2015). 3D tangibles facilitate joint visual attention in dyads. International Society of the Learning Sciences, Inc. [ISLS].
- Sharma, K., Caballero, D., Verma, H., Jermann, P., & Dillenbourg, P. (2015). Looking AT versus looking THROUGH: A dual eye-tracking study in MOOC context. International Society of the Learning Sciences, Inc. [ISLS].
- Sharma, K., Jermann, P., Nüssli, M.-A., & Dillenbourg, P. (2013). Understanding Collaborative Program Comprehension: Interlacing Gaze and Dialogues. https://repository.isls.org//handle/1/1944
- Sharma, K., Leftheriotis, I., Noor, J., & Giannakos, M. (2017). Dual Gaze as a Proxy for Collaboration in Informal Learning. https://repository.isls.org//handle/1/230
- Sharma, K., & Olsen, J. (2019). An Alternate Statistical Lens to Look at Collaboration Data: Extreme Value Theory. International Conference on Computer Supported Collaborative Learning, 400–407.
- Spikol, D., Ruffaldi, E., & Cukurova, M. (2017). Using Multimodal Learning Analytics to Identify Aspects of Collaboration in Project-Based Learning.
- Spikol, D., Ruffaldi, E., Dabisias, G., & Cukurova, M. (2018). Supervised machine learning in multimodal learning analytics for estimating success in project-based learning. Journal of Computer Assisted Learning, 34(4), 366–377.
- Spikol, D., Ruffaldi, E., Landolfi, L., & Cukurova, M. (2017). Estimation of success in collaborative learning based on multimodal learning analytics features. 2017 IEEE 17th International Conference on Advanced Learning Technologies (ICALT), 269–273.
- Sriramulu, A., Lin, J., & Oviatt, S. (2019). Dynamic Adaptive Gesturing Predicts Domain Expertise in Mathematics. 2019 International Conference on Multimodal Interaction, 105–113. https://doi.org/10.1145/3340555.3353726
- Starr, E. L., Reilly, J. M., & Schneider, B. (2018). Toward Using Multi-Modal Learning Analytics to Support and Measure Collaboration in Co-Located Dyads. International Society of the Learning Sciences, Inc. [ISLS].
- Stewart, A. E. B., Keirn, Z. A., & D’Mello, S. K. (2018). Multimodal Modeling of Coordination and Coregulation Patterns in Speech Rate during Triadic Collaborative Problem Solving. Proceedings of the 2018 on International Conference on Multimodal Interaction - ICMI ’18, 21–30. https://doi.org/10.1145/3242969.3242989
- Stroebe, W., & Strack, F. (2014). The alleged crisis and the illusion of exact replication. Perspectives on Psychological Science, 9(1), 59–71.
- Tapola, A., Veermans, M., & Niemivirta, M. (2013). Predictors and outcomes of situational interest during a science learning task. Instructional Science, 41(6), 1047–1064.
- Terken, J., & Sturm, J. (2010). Multimodal support for social dynamics in co-located meetings. Personal and Ubiquitous Computing, 14(8), 703–714.
- Viswanathan, S. A., & VanLehn, K. (2017). High accuracy detection of collaboration from log data and superficial speech features. International Society of the Learning Sciences.
- Viswanathan, S. A., & VanLehn, K. (2017). Using the tablet gestures and speech of pairs of students to classify their collaboration. IEEE Transactions on Learning Technologies, 11(2), 230–242.
- Vrzakova, H., Amon, M. J., Stewart, A., Duran, N. D., & D’Mello, S. K. (2020). Focused or stuck together: Multimodal patterns reveal triads’ performance in collaborative problem solving. Proceedings of the Tenth International Conference on Learning Analytics & Knowledge, 295–304.
- Vrzakova, H., Amon, M. J., Stewart, A. E., & D’Mello, S. K. (2019). Dynamics of visual attention in multiparty collaborative problem solving using multidimensional recurrence quantification analysis. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14.
- Winne, P. H. (2010). Improving measurements of self-regulated learning. Educational Psychologist, 45(4), 267–276.
- Worsley, M. (2018). (Dis)Engagement Matters: Identifying Efficacious Learning Practices with Multimodal Learning Analytics.
- Xie, B., Reilly, J. M., Dich, Y. L., & Schneider, B. (2018). Augmenting Qualitative Analyses of Collaborative Learning Groups Through Multi-Modal Sensing. International Society of the Learning Sciences, Inc. [ISLS].
- Yamashita, N., Kaji, K., Kuzuoka, H., & Hirata, K. (2011). Improving visibility of remote gestures in distributed tabletop collaboration. Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, 95–104.
- Yoon, D., Chen, N., Randles, B., Cheatle, A., Löckenhoff, C. E., Jackson, S. J., Sellen, A., & Guimbretière, F. (2016). RichReview++: Deployment of a Collaborative Multi-modal Annotation System for Instructor Feedback and Peer Discussion. Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing, 195–205.
- de Kort, Y. A. W., IJsselsteijn, W. A., & Poels, K. (2007). Digital games as social presence technology: Development of the Social Presence in Gaming Questionnaire (SPGQ). Proceedings of PRESENCE, 1–9.