IX Lab Publications

Note: the list of publications below is generated automatically with Zotpress. Thanks to Katie Seaborn for the useful plug-in!

Clicking “Cite” next to an entry will export the citation to your bibliography management tool or let you save it as a .RIS file.
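
For reference, the list is produced by a Zotpress shortcode embedded in this page. A minimal sketch, assuming the commonly documented Zotpress attribute names (userid, style, limit, sortby, order) and the parameters visible in the page source, so treat it as an approximation rather than the exact configuration used here:

[zotpress userid="1679373" style="apa" limit="50" sortby="date" order="desc"]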


Arranz-Romero, J., Roig-Vila, R., & Cazorla, M. (2026). IPA 2.0: Validation of an Interpretable Emotion-Attention Index for Neuro-Adaptive Learning with AI. Applied Sciences, 16(5). https://doi.org/10.3390/app16052515

Jayawardena, G., Jayawardana, Y., & Gwizdka, J. (2025). Measuring Mental Effort in Real Time Using Pupillometry. Journal of Eye Movement Research, 18(6), 70. https://doi.org/10.3390/jemr18060070

Senova, S., Palfi, S., Cretenoud, A., Begue, J., Labiod, M. A., Mellouk, A., Wolkenstein, P., Dauguet, J., & Mainar, P. (2025). Real-time and personalized dry EEG neurofeedback increased students’ attention during online teaching in everyday life conditions. Neurocomputing, 132045. https://doi.org/10.1016/j.neucom.2025.132045

Spina, D., Gwizdka, J., Ji, K., Moshfeghi, Y., Mostafa, J., Ruotsalo, T., Zhang, M., Ahmad, A., Lawati, S. F. D. A., Boonprakong, N., Fernando, N., He, J., Hoeber, O., Jayawardena, G., Lee, B.-G., Liu, H., Pike, M., Pirmoradi, A., Nakisa, B., … Wilson, M. L. (2025). Report on the 3rd Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval (NeuroPhysIIR 2025) at SIGIR CHIIR 2025. SIGIR Forum, 59(1), 1–43. https://doi.org/10.1145/3769733.3769740

Wei, L., Yu, Y., Qin, Y., & Zhang, S. (2025). A Survey of EEG-Based Approaches to Classroom Attention Assessment in Education. Information, 16(10), 860. https://doi.org/10.3390/info16100860

Liu, H., Gwizdka, J., & Lease, M. (2025). Exploring Multidimensional Checkworthiness: Designing AI-assisted Claim Prioritization for Human Fact-checkers. Proceedings of the ACM on Human-Computer Interaction (CSCW 2025), 9(7). https://doi.org/10.1145/3757473

Dang, Q., Kucukosmanoglu, M., Anoruo, M., Kargosha, G., Conklin, S., & Brooks, J. (2025). Automatic detection of cognitive events using machine learning and understanding models’ interpretations of human cognition. Scientific Reports, 15(1), 30506. https://doi.org/10.1038/s41598-025-16165-4

Latifzadeh, K., Gwizdka, J., & Leiva, L. A. (2025). A Versatile Dataset of Mouse and Eye Movements on Search Engine Results Pages. Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’25), 3412–3421. https://doi.org/10.1145/3726302.3730325

Silvan, A., Parra, L. C., & Madsen, J. (2025). Real-Time Estimation of Overt Attention from Dynamic Features of the Face Using Deep Learning. 2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–5. https://doi.org/10.1109/EMBC58623.2025.11254013

Jayawardena, G., Jayawardana, Y., Abeysinghe, Y., Mahanama, B., Jayarathna, S., & Gwizdka, J. (2025). A Real-Time Approach to Capture Ambient and Focal Attention in Visual Search. Proceedings of the 2025 Symposium on Eye Tracking Research and Applications (ETRA ’25), 1–7. https://doi.org/10.1145/3715669.3723111

Gollan, B., & Raggam, P. (2025). Beyond Gaze: Quantifying Conscious Perception Through an Innovative Eye Tracking Biomarker. Proceedings of the ACM on Human-Computer Interaction, 9(3), ETRA06:1–ETRA06:17. https://doi.org/10.1145/3725831

Kurzom, N., Misherky, J., & Mendelsohn, A. (2025). The Effect of Background Music on Memory Formation of Spoken Words: A Tradeoff Between Tension Perception and Memory. Music Perception, 1–16. https://doi.org/10.1525/mp.2025.2449567

Gwizdka, J., & Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR ’25), 377–381. https://doi.org/10.1145/3698204.3716474

Gwizdka, J., Mostafa, J., Zhang, M., Ji, K., Moshfeghi, Y., Ruotsalo, T., & Spina, D. (2025). NeuroPhysIIR: International Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR ’25), 413–415. https://doi.org/10.1145/3698204.3716481
eractive%20Information%20Retrieval%20%28NeuroPhysIIR%5Cu201925%29%20aims%20to%20bringing%20together%20researchers%20from%20information%20science%2C%20human-computer%20interaction%2C%20cognitive%20neuroscience%2C%20and%20related%20fields%2C%20to%20foster%20cross-disciplinary%20collaboration%20and%20accelerate%20progress%20in%20neurophysiologically-informed%20IIR%20research.%20As%20the%20third%20edition%20following%20successful%20workshops%20at%20SIGIR%5Cu201915%26nbsp%3B%5B5%5D%20and%20CHIIR%5Cu201917%26nbsp%3B%5B6%5D%2C%20we%20anticipate%20that%20the%20interactive%20nature%20of%20this%20workshop%20will%20not%20only%20raise%20awareness%20but%20also%20lower%20the%20entry%20barriers%20for%20engaging%20with%20this%20exciting%20research%20area%20within%20the%20wider%20IIR%20community.%20Workshop%20website%3A%20https%3A%5C%2F%5C%2Fneurophysiir.github.io%5C%2Fchiir2025%5C%2F.%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22%22%2C%22date%22%3A%22April%2029%2C%202025%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716481%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716481%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22IC7YILZX%22%5D%2C%22dateModified%22%3A%222025-11-24T22%3A43%3A12Z%22%7D%7D%2C%7B%22key%22%3A%22NYHMJVB3%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Chavula%20and%20Kist%22%2C%22parsedDate%22%3A%222025-04-29%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BChavula%2C%20C.%2C%20%26amp%3B%20Kist%2C%20C.%20%282025%29.%20Fitting%20to%20the%20body%3A%20The%20role%20of%20embodiment%20in%20beauty%20information%20seeking.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%2C%20CHIIR%20%26%23x2019%3B25%26lt%3B%5C%2Fi%26gt%3B%2C%20140%26%23x2013%3B153.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716452%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716452%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DNYHMJVB3%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Fitting%20to%20the%20body%3A%20The%20role%20of%20embodiment%20in%20beauty%20information%20seeking%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Catherine%22%2C%22lastName%22%3A%22Chavula%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Cassandra%22%2C%22lastName%22%3A%22Kist%22%7D%5D%2C%22abstractNote%22%3A%22The%20body%20has%20long%20been%20recognized%20by%20other%20disciplines%20as%20a%20site%20of%20meaning-making%2C%20a%20medium%20thro
ugh%20which%20individuals%20experience%20the%20world%2C%20convey%20meaning%20to%20others%2C%20and%20act%20on%20the%20world.%20However%2C%20when%20it%20comes%20to%20searching%20for%20information%2C%20the%20body%20is%20often%20ignored%2C%20overshadowed%20by%20a%20focus%20on%20cognition.%20Studies%20are%20beginning%20to%20fill%20this%20gap%2C%20pointing%20to%20the%20role%20and%20potential%20impact%20the%20body%20can%20have%20on%20information%20practices%2C%20prompting%20further%20attention%20to%20the%20role%20of%20the%20body%20in%20information%20seeking%20and%20how%20it%20can%20inform%20the%20design%20of%20interactive%20information%20retrieval%20systems.%20Beauty%20information%20practices%2C%20due%20to%20their%20close%20association%20with%20the%20corporeal%2C%20physical%2C%20material%2C%20and%20symbolic%20nature%20of%20the%20body%2C%20potentially%20harbors%20significant%20insights%20for%20re-centering%20the%20body%20in%20information%20sciences%20and%20informing%20user-%2C%20or%20in%20this%20case%2C%20body-centered%20information%20retrieval%20systems.%20Our%20research%20aimed%20at%20answering%20the%20overarching%20question%3A%20How%20does%20%5Cu2018embodiment%5Cu2019%20manifest%20in%20beauty%20related%20information%20seeking%20using%20digital%20tools%3F%20We%20undertake%20a%20survey%20through%20Prolific%20asking%20participants%20to%20describe%20a%20recent%20beauty%20information%20search%20task%20using%20digital%20tools%2C%20encompassing%20the%20type%20of%20information%20sought%2C%20their%20motivation%2C%20and%20its%20usefulness.%20The%20findings%20reveal%20the%20significance%20of%20the%20socio-cultural%20context%20of%20users%2C%20in%20terms%20of%20both%20social%20pressure%20and%20user%20agency%2C%20and%20the%20intersection%20between%20the%20physicality%20of%20the%20body%20and%20materiality%20of%20beauty%20products%20and%20procedures%20in%20shaping%20how%20users%20search%20for%20and%20select%20useful%20information.%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22%22%2C%22date%22%3A%22April%2029%2C%202025%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716452%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716452%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A12%3A13Z%22%7D%7D%2C%7B%22key%22%3A%22X8TB3DXZ%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Le%20et%20al.%22%2C%22parsedDate%22%3A%222025-04-22%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLe%2C%20D.%20D.%2C%20Kieu%2C%20H.%20D.%2C%20Le%2C%20T.%20H.%2C%20%26amp%3B%20Ngo%2C%20T.%20D.%20%282025%29.%20Attention%20detection%3A%20an%20EEG%20and%20eye%20tracking%20features%20fusion%20approach%20in%20eye-based%20interaction%20systems.%20%26lt%3Bi%26gt%3BNeural%20Computing%20and%20Applications%26lt%3B%5C%2Fi%26gt%3B.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00521-025-11195-5%26%23039%3B%26gt%3Bhttps%3A%5C%2F
%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00521-025-11195-5%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DX8TB3DXZ%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22journalArticle%22%2C%22title%22%3A%22Attention%20detection%3A%20an%20EEG%20and%20eye%20tracking%20features%20fusion%20approach%20in%20eye-based%20interaction%20systems%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Duc%20Duy%22%2C%22lastName%22%3A%22Le%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hai%20Dang%22%2C%22lastName%22%3A%22Kieu%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thanh%20Ha%22%2C%22lastName%22%3A%22Le%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Thi%20Duyen%22%2C%22lastName%22%3A%22Ngo%22%7D%5D%2C%22abstractNote%22%3A%22Eye-based%20interaction%20systems%20have%20significantly%20enhanced%20the%20quality%20of%20life%20for%20individuals%20with%20disabilities%20by%20restoring%20their%20communication%20abilities.%20These%20systems%20require%20users%20to%20fixate%20their%20gaze%20on%20a%20key%20for%20a%20specific%20duration%20until%20it%20is%20fully%20entered.%20In%20this%20study%2C%20we%20aim%20to%20evaluate%20the%20correlation%20between%20the%20user%5Cu2019s%20mental%20attention%20and%20gaze%20focus%20during%20key%20selection%2C%20and%20explore%20whether%20this%20potential%20connection%20can%20be%20harnessed%20to%20improve%20the%20efficiency%20of%20eye-based%20interaction%20systems.%20To%20address%20this%20issue%2C%20we%20have%20proposed%20a%20method%20for%20detecting%20user%20attention%2C%20while%20using%20an%20eye-controlled%20on-screen%20keyboard%20by%20integrating%20EEG%20and%20Eye-Tracking%20%28ET%29%20signals.%20Our%20hypothesis%20posits%20that%20cognitive%20attention%20and%20gaze%20coincide%20during%20key%20selection.%20After%20data%20collection%2C%20EEG%20signals%20are%20labeled%20based%20on%20ET%20signals%2C%20followed%20by%20several%20preprocessing%20steps.%20Differential%20entropy%20and%20latent%20vectors%20via%20Variational%20Autoencoder%20are%20used%20as%20EEG%20features%2C%20while%20fixation%20and%20saccade%20serve%20as%20ET%20features.%20A%20Convolutional%20Neural%20Network%20is%20employed%20to%20combine%20these%20features%20to%20determine%20the%20user%5Cu2019s%20level%20of%20attention.%20We%20recorded%20the%20EEG%20and%20ET%20signals%20of%2030%20healthy%20subjects%20using%20a%20Vietnamese%20eye-controlled%20spelling%20communication%20system.%20Our%20method%20achieves%20a%20classification%20accuracy%20of%20up%20to%2092.37%25%20with%20k-fold%20cross-validation%20and%2096.80%25%20with%20cross-subject%20validation.%20These%20findings%20indicate%20a%20correspondence%20between%20the%20user%5Cu2019s%20cognitive%20attention%20and%20gaze%20during%20key%20selection.%20The%20proposed%20method%20can%20enhance%20the%20efficiency%20of%20eye-based%20interaction%20systems%20by%20improving%20key%20selection%20speed%20and%20developing%20systems%20to%20monitor%20and%20alert%20individuals%20when%20they%20lose%20focus%20during%20activities%20such%20as%20driving%20or%20learning.%22%2C%22date%22%3A%222025-04-22%22%2C%22section%22%3A%22%22%2C%22partNumber%22%3A%22%22%2C%22partTitle%22%3A%22%22%2C%22DOI%22%3A%2210.1007
%5C%2Fs00521-025-11195-5%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2Fs00521-025-11195-5%22%2C%22PMID%22%3A%22%22%2C%22PMCID%22%3A%22%22%2C%22ISSN%22%3A%221433-3058%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22AD3FF8M3%22%5D%2C%22dateModified%22%3A%222025-05-27T19%3A34%3A04Z%22%7D%7D%2C%7B%22key%22%3A%22LE4BSG5S%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Lopez-Cardona%20et%20al.%22%2C%22parsedDate%22%3A%222025-04-01%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BLopez-Cardona%2C%20A.%2C%20Emami%2C%20P.%2C%20Idesis%2C%20S.%2C%20Duraisamy%2C%20S.%2C%20Leiva%2C%20L.%20A.%2C%20%26amp%3B%20Arapakis%2C%20I.%20%282025%29.%20%26lt%3Bi%26gt%3BA%20Comparative%20Study%20of%20Scanpath%20Models%20in%20Graph-Based%20Visualization%26lt%3B%5C%2Fi%26gt%3B.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F0.1145%5C%2F3715669.3725882%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F0.1145%5C%2F3715669.3725882%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DLE4BSG5S%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22preprint%22%2C%22title%22%3A%22A%20Comparative%20Study%20of%20Scanpath%20Models%20in%20Graph-Based%20Visualization%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Angela%22%2C%22lastName%22%3A%22Lopez-Cardona%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Parvin%22%2C%22lastName%22%3A%22Emami%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Sebastian%22%2C%22lastName%22%3A%22Idesis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Saravanakumar%22%2C%22lastName%22%3A%22Duraisamy%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Luis%20A.%22%2C%22lastName%22%3A%22Leiva%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ioannis%22%2C%22lastName%22%3A%22Arapakis%22%7D%5D%2C%22abstractNote%22%3A%22Information%20Visualization%20%28InfoVis%29%20systems%20utilize%20visual%20representations%20to%20enhance%20data%20interpretation.%20Understanding%20how%20visual%20attention%20is%20allocated%20is%20essential%20for%20optimizing%20interface%20design.%20However%2C%20collecting%20Eye-tracking%20%28ET%29%20data%20presents%20challenges%20related%20to%20cost%2C%20privacy%2C%20and%20scalability.%20Computational%20models%20provide%20alternatives%20for%20predicting%20gaze%20patterns%2C%20thereby%20advancing%20InfoVis%20research.%20In%20our%20study%2C%20we%20conducted%20an%20ET%20experiment%20with%2040%20participants%20who%20analyzed%20graphs%20while%20responding%20to%20questions%20of%20varying%20complexity%20within%20the%20context%20of%20digital%20forensics.%20We%20compared%20human%20scanpaths%20with%20synthetic%20ones%20generated%20by%20models%20such%20as%20DeepGaze%2C%20UMSS%2C%20and%20Gazeformer.%20Our%20research%20evaluates%20the%20accuracy%
20of%20these%20models%20and%20examines%20how%20question%20complexity%20and%20number%20of%20nodes%20influence%20performance.%20This%20work%20contributes%20to%20the%20development%20of%20predictive%20modeling%20in%20visual%20analytics%2C%20offering%20insights%20that%20can%20enhance%20the%20design%20and%20effectiveness%20of%20InfoVis%20systems.%22%2C%22genre%22%3A%22%22%2C%22repository%22%3A%22%22%2C%22archiveID%22%3A%22%22%2C%22date%22%3A%222025-04-01%22%2C%22DOI%22%3A%220.1145%5C%2F3715669.3725882%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22http%3A%5C%2F%5C%2Farxiv.org%5C%2Fabs%5C%2F2503.24160%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22PBSPISGH%22%5D%2C%22dateModified%22%3A%222025-04-04T15%3A43%3A07Z%22%7D%7D%2C%7B%22key%22%3A%226AN8HD9Z%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mishra%20et%20al.%22%2C%22parsedDate%22%3A%222025-04%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMishra%2C%20A.%2C%20Shukla%2C%20S.%2C%20Torres%2C%20J.%2C%20Gwizdka%2C%20J.%2C%20%26amp%3B%20Roychowdhury%2C%20S.%20%282025%29.%20Thought2Text%3A%20Text%20Generation%20from%20EEG%20Signal%20using%20Large%20Language%20Models%20%28LLMs%29.%20In%20L.%20Chiruzzo%2C%20A.%20Ritter%2C%20%26amp%3B%20L.%20Wang%20%28Eds.%29%2C%20%26lt%3Bi%26gt%3BFindings%20of%20the%20Association%20for%20Computational%20Linguistics%3A%20NAACL%202025%26lt%3B%5C%2Fi%26gt%3B%20%28pp.%203747%26%23x2013%3B3759%29.%20Association%20for%20Computational%20Linguistics.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Faclanthology.org%5C%2F2025.findings-naacl.207%5C%2F%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Faclanthology.org%5C%2F2025.findings-naacl.207%5C%2F%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3D6AN8HD9Z%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Thought2Text%3A%20Text%20Generation%20from%20EEG%20Signal%20using%20Large%20Language%20Models%20%28LLMs%29%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Abhijit%22%2C%22lastName%22%3A%22Mishra%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shreya%22%2C%22lastName%22%3A%22Shukla%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jose%22%2C%22lastName%22%3A%22Torres%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jacek%22%2C%22lastName%22%3A%22Gwizdka%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shounak%22%2C%22lastName%22%3A%22Roychowdhury%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Luis%22%2C%22lastName%22%3A%22Chiruzzo%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Alan%22%2C%22lastName%22%3A%22Ritter%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Lu%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22Decoding%20and%20expressing%20brain%20activity%20in%20a%20comprehensible%20for
m%20is%20a%20challenging%20frontier%20in%20AI.%20This%20paper%20presents%20%2AThought2Text%2A%2C%20which%20uses%20instruction-tuned%20Large%20Language%20Models%20%28LLMs%29%20fine-tuned%20with%20EEG%20data%20to%20achieve%20this%20goal.%20The%20approach%20involves%20three%20stages%3A%20%281%29%20training%20an%20EEG%20encoder%20for%20visual%20feature%20extraction%2C%20%282%29%20fine-tuning%20LLMs%20on%20image%20and%20text%20data%2C%20enabling%20multimodal%20description%20generation%2C%20and%20%283%29%20further%20fine-tuning%20on%20EEG%20embeddings%20to%20generate%20text%20directly%20from%20EEG%20during%20inference.%20Experiments%20on%20a%20public%20EEG%20dataset%20collected%20for%20six%20subjects%20with%20image%20stimuli%20and%20text%20captions%20demonstrate%20the%20efficacy%20of%20multimodal%20LLMs%20%28%2ALLaMA-v3%2A%2C%20%2AMistral-v0.3%2A%2C%20%2AQwen2.5%2A%29%2C%20validated%20using%20traditional%20language%20generation%20evaluation%20metrics%2C%20as%20well%20as%20%2Afluency%2A%20and%20%2Aadequacy%2A%20measures.%20This%20approach%20marks%20a%20significant%20advancement%20towards%20portable%2C%20low-cost%20%5Cu201cthoughts-to-text%5Cu201d%20technology%20with%20potential%20applications%20in%20both%20neuroscience%20and%20natural%20language%20processing.%22%2C%22proceedingsTitle%22%3A%22Findings%20of%20the%20Association%20for%20Computational%20Linguistics%3A%20NAACL%202025%22%2C%22conferenceName%22%3A%22Findings%202025%22%2C%22date%22%3A%222025-04%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%22%22%2C%22ISBN%22%3A%22979-8-89176-195-7%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Faclanthology.org%5C%2F2025.findings-naacl.207%5C%2F%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22%22%2C%22collections%22%3A%5B%22IC7YILZX%22%5D%2C%22dateModified%22%3A%222025-11-24T22%3A43%3A20Z%22%7D%7D%2C%7B%22key%22%3A%22XZ3FBY42%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Pirmoradi%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BPirmoradi%2C%20A.%2C%20Hoeber%2C%20O.%2C%20Harvey%2C%20M.%2C%20Momeni%2C%20M.%2C%20%26amp%3B%20Gleeson%2C%20D.%20%282025%29.%20Integrating%20Eye%20Tracking%2C%20Feature%20Use%2C%20and%20Emotional%20Valence%3A%20A%20Multimodal%20Approach%20to%20Evaluating%20Search%20Interfaces.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%2023%26%23x2013%3B41.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716444%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716444%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DXZ3FBY42%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Integrating%20Eye%20Tracking%2C%20Feature%20Use%2C%20and%20Emotional%20Valence
%3A%20A%20Multimodal%20Approach%20to%20Evaluating%20Search%20Interfaces%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Abbas%22%2C%22lastName%22%3A%22Pirmoradi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Orland%22%2C%22lastName%22%3A%22Hoeber%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Morgan%22%2C%22lastName%22%3A%22Harvey%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Milad%22%2C%22lastName%22%3A%22Momeni%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22David%22%2C%22lastName%22%3A%22Gleeson%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716444%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716444%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%226UC6XQME%22%5D%2C%22dateModified%22%3A%222025-10-01T20%3A35%3A37Z%22%7D%7D%2C%7B%22key%22%3A%222ZH5HD8K%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Vizgirda%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BVizgirda%2C%20V.%2C%20McNeill%2C%20F.%2C%20%26amp%3B%20Robertson%2C%20J.%20%282025%29.%20Teacher%20Online%20Educational%20Resource%20Search%20in%20Education%20System%20Context.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%2081%26%23x2013%3B99.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716448%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716448%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3D2ZH5HD8K%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Teacher%20Online%20Educational%20Resource%20Search%20in%20Education%20System%20Context%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Vidminas%22%2C%22lastName%22%3A%22Vizgirda%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Fiona%22%2C%22lastName%22%3A%22McNeill%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Judy%22%2C%22lastName%22%3A%22Robertson%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A
%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716448%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716448%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A37%3A48Z%22%7D%7D%2C%7B%22key%22%3A%22S3IPUJ8S%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bogers%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBogers%2C%20T.%2C%20Kaya%2C%20M.%2C%20%26amp%3B%20G%26%23xE4%3Bde%2C%20M.%20%282025%29.%20From%20Queries%20to%20Candidates%3A%20Exploring%20Search%20and%20Source%20Interaction%20Behavior%20of%20Recruiters.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20112%26%23x2013%3B128.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716450%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716450%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DS3IPUJ8S%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22From%20Queries%20to%20Candidates%3A%20Exploring%20Search%20and%20Source%20Interaction%20Behavior%20of%20Recruiters%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Toine%22%2C%22lastName%22%3A%22Bogers%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mesut%22%2C%22lastName%22%3A%22Kaya%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maria%22%2C%22lastName%22%3A%22G%5Cu00e4de%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716450%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716450%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A37%3A38Z%22%7D%7D%2C%7B%22key%22%3A%225A9RKHU5%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zerhoudi%20and%20Granitzer%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdi
v%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZerhoudi%2C%20S.%2C%20%26amp%3B%20Granitzer%2C%20M.%20%282025%29.%20SearchLab%3A%20Exploring%20Conversational%20and%20Traditional%20Search%20Interfaces%20in%20Information%20Retrieval.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20382%26%23x2013%3B389.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716475%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716475%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3D5A9RKHU5%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22SearchLab%3A%20Exploring%20Conversational%20and%20Traditional%20Search%20Interfaces%20in%20Information%20Retrieval%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Saber%22%2C%22lastName%22%3A%22Zerhoudi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Michael%22%2C%22lastName%22%3A%22Granitzer%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716475%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716475%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A36%3A38Z%22%7D%7D%2C%7B%22key%22%3A%22P8FLE3PK%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Gibson%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGibson%2C%20R.%20C.%2C%20Meiklem%2C%20R.%2C%20Moncur%2C%20W.%2C%20%26amp%3B%20Ruthven%2C%20I.%20%282025%29.%20Online%20Information%20Disclosure%20and%20Information%20Privacy%20Practices%20During%20Significant%20Life%20Transitions%3A%20A%20Scoping%20Review.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%2042%26%23x2013%3B56.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716445%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145
%5C%2F3698204.3716445%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DP8FLE3PK%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Online%20Information%20Disclosure%20and%20Information%20Privacy%20Practices%20During%20Significant%20Life%20Transitions%3A%20A%20Scoping%20Review%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ryan%20Colin%22%2C%22lastName%22%3A%22Gibson%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ramsay%22%2C%22lastName%22%3A%22Meiklem%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Wendy%22%2C%22lastName%22%3A%22Moncur%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Ian%22%2C%22lastName%22%3A%22Ruthven%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716445%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716445%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A36%3A30Z%22%7D%7D%2C%7B%22key%22%3A%22NYBIYGJ7%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Emter%20and%20Chavula%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BEmter%2C%20F.%2C%20%26amp%3B%20Chavula%2C%20C.%20%282025%29.%20Seeking%20Control%3A%20How%20Women%20Evaluate%20and%20Use%20Menopause%20Related%20Information.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20179%26%23x2013%3B194.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716455%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716455%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DNYBIYGJ7%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Seeking%20Control%3A%20How%20Women%20Evaluate%20and%20Use%20Menopause%20Related%20Information%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Francesca%22%2C%22last
Name%22%3A%22Emter%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Catherine%22%2C%22lastName%22%3A%22Chavula%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716455%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716455%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A36%3A24Z%22%7D%7D%2C%7B%22key%22%3A%22ZE4C369A%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Zhang%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BZhang%2C%20J.%2C%20Xie%2C%20Y.%2C%20Ma%2C%20J.%2C%20Zheng%2C%20Z.%2C%20Zhai%2C%20S.%2C%20%26amp%3B%20Wang%2C%20P.%20%282025%29.%20%26%23x201C%3BI%26%23x2019%3Bm%20Looking%20for%20a%20Book%20with%20a%20Big%20Watermelon%20and%20Many%20Small%20Ants%26%23x201D%3B%3A%20An%20Exploratory%20Study%20on%20a%20Visualized%20Search%20System%20for%20Preschoolers%26%23x2019%3B%20Picture%20Book%20Search.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20342%26%23x2013%3B347.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716469%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716469%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DZE4C369A%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22%5Cu201cI%27m%20Looking%20for%20a%20Book%20with%20a%20Big%20Watermelon%20and%20Many%20Small%20Ants%5Cu201d%3A%20An%20Exploratory%20Study%20on%20a%20Visualized%20Search%20System%20for%20Preschoolers%27%20Picture%20Book%20Search%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jinghan%22%2C%22lastName%22%3A%22Zhang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yi%22%2C%22lastName%22%3A%22Xie%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Junyang%22%2C%22lastName%22%3A%22Ma%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zhihan%22%2C%22lastName%22%3A%22Zheng%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shenkang%22%2C%22lastName%22%3A%22Zhai%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Pianran%22%2C%22lastName%22%3A%22Wang%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proce
edingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716469%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716469%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A36%3A09Z%22%7D%7D%2C%7B%22key%22%3A%22N3IJK4ZB%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Van%20Der%20Sluis%20and%20Azzopardi%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BVan%20Der%20Sluis%2C%20F.%2C%20%26amp%3B%20Azzopardi%2C%20L.%20%282025%29.%20Search%20Changes%20Consumers%26%23x2019%3B%20Minds%3A%20How%20Recognizing%20Gaps%20Drives%20Sustainable%20Choices%3A%20How%20Recognizing%20Gaps%20Drives%20Sustainable%20Choices.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20195%26%23x2013%3B207.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716456%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716456%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DN3IJK4ZB%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Search%20Changes%20Consumers%27%20Minds%3A%20How%20Recognizing%20Gaps%20Drives%20Sustainable%20Choices%3A%20How%20Recognizing%20Gaps%20Drives%20Sustainable%20Choices%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Frans%22%2C%22lastName%22%3A%22Van%20Der%20Sluis%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Leif%22%2C%22lastName%22%3A%22Azzopardi%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716456%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716456%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A36%3A01Z%22%7D%7D%2C%7B%22key%22%3A%22RQ7VUA7H%22%2C%22library%22%3A%7B%
22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Bahl%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BBahl%2C%20R.%2C%20Chang%2C%20S.%2C%20McKay%2C%20D.%2C%20Buchanan%2C%20G.%2C%20%26amp%3B%20Cheong%2C%20M.%20%282025%29.%20A%20Whole%20New%20World%3A%20Migrant%20Journeys%20Through%20Digital%20Information%20Landscapes.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20247%26%23x2013%3B262.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716460%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716460%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DRQ7VUA7H%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22A%20Whole%20New%20World%3A%20Migrant%20Journeys%20Through%20Digital%20Information%20Landscapes%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Rashika%22%2C%22lastName%22%3A%22Bahl%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Shanton%22%2C%22lastName%22%3A%22Chang%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Dana%22%2C%22lastName%22%3A%22McKay%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22George%22%2C%22lastName%22%3A%22Buchanan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Marc%22%2C%22lastName%22%3A%22Cheong%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716460%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716460%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A35%3A51Z%22%7D%7D%2C%7B%22key%22%3A%22IM6R4TH4%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Siro%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BSiro%2C%20C.%2C%20Abbasiantaeb%2C%20Z.%2C%20Yuan%2C%20Y.%2C%20Aliannejadi%2C%20M.%2C%20%26amp%3B%20De%20Rijke%2C%20M.%20%282025%29.%20Do%20Images%20Clarify%3F%20A%20Study%20on%20the%20
Effect%20of%20Images%20on%20Clarifying%20Questions%20in%20Conversational%20Search.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20273%26%23x2013%3B291.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716464%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716464%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DIM6R4TH4%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Do%20Images%20Clarify%3F%20A%20Study%20on%20the%20Effect%20of%20Images%20on%20Clarifying%20Questions%20in%20Conversational%20Search%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Clemencia%22%2C%22lastName%22%3A%22Siro%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Zahra%22%2C%22lastName%22%3A%22Abbasiantaeb%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yifei%22%2C%22lastName%22%3A%22Yuan%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mohammad%22%2C%22lastName%22%3A%22Aliannejadi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Maarten%22%2C%22lastName%22%3A%22De%20Rijke%22%7D%5D%2C%22abstractNote%22%3A%22%22%2C%22proceedingsTitle%22%3A%22Proceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22conferenceName%22%3A%22CHIIR%20%2725%3A%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%22%2C%22date%22%3A%222025-03-24%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1145%5C%2F3698204.3716464%22%2C%22ISBN%22%3A%22979-8-4007-1290-6%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdl.acm.org%5C%2Fdoi%5C%2F10.1145%5C%2F3698204.3716464%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22WNART92P%22%5D%2C%22dateModified%22%3A%222025-09-15T23%3A35%3A21Z%22%7D%7D%2C%7B%22key%22%3A%22R2E47UUX%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22creatorSummary%22%3A%22Mayerhofer%20et%20al.%22%2C%22parsedDate%22%3A%222025-03-24%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BMayerhofer%2C%20K.%2C%20Capra%2C%20R.%2C%20%26amp%3B%20Elsweiler%2C%20D.%20%282025%29.%20Blending%20Queries%20and%20Conversations%3A%20Understanding%20Trust%2C%20Verification%2C%20and%20System%20Choice%20in%20Search%20and%20Chat%20Interactions.%20%26lt%3Bi%26gt%3BProceedings%20of%20the%202025%20ACM%20SIGIR%20Conference%20on%20Human%20Information%20Interaction%20and%20Retrieval%26lt%3B%5C%2Fi%26gt%3B%2C%20168%26%23x2013%3B178.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1145%5C%2F3698204.3716454%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.or
Mayerhofer, K., Capra, R., & Elsweiler, D. (2025). Blending Queries and Conversations: Understanding Trust, Verification, and System Choice in Search and Chat Interactions. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval. https://doi.org/10.1145/3698204.3716454

Bogers, T., Gäde, M., Hall, M., Koolen, M., Petras, V., & Skov, M. (2025). Exploring the Zero-Shot Known-Item Retrieval Capabilities of LLMs for Casual Leisure Information Needs. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 316–325. https://doi.org/10.1145/3698204.3716466

Sun, R., Kong, R., Milton, A., Kluver, D., Paterson, I., & Konstan, J. A. (2025). Why They Come And Go: A Case Study of Productive Flyby Users and Their Rating Integrity Challenge in Movie Recommenders. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 1–11. https://doi.org/10.1145/3698204.3716442

Yang, Y., Capra, R., & Guo, M. (2025). Beyond the Surface: Investigating Explicit and Implicit Perceptions of Music Diversity. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 310–315. https://doi.org/10.1145/3698204.3716465

Orin, K., & Hoeber, O. (2025). Organizing Found Information in Public Digital Library Search. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 358–366. https://doi.org/10.1145/3698204.3716471

Liu, J., & He, J. (2025). Boundedly Rational Searchers Interacting with Medical Misinformation: Characterizing Context-Dependent Decoy Effects on Credibility and Usefulness Evaluation in Sessions. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 154–167. https://doi.org/10.1145/3698204.3716453

Choi, B., & Arguello, J. (2025). The Effects of Working Memory during a Search and Sensemaking Task. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 100–111. https://doi.org/10.1145/3698204.3716449

Liu, H., Zhao, S., Wang, S., Hansen, P., Oakley, I., & Le, K.-D. (2025). Designing Interactive Multimodal Information Retrieval and Access for Heads Up Computing (DIMIRA-HUC). Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 416–418. https://doi.org/10.1145/3698204.3716482

Hoeber, O. (2025). Design Principles for Exploratory Search Interfaces. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 12–22. https://doi.org/10.1145/3698204.3716443

Pirmoradi, A., Hoeber, O., Harvey, M., Momeni, M., & Gleeson, D. (2025). Integrating Eye Tracking, Feature Use, and Emotional Valence: A Multimodal Approach to Evaluating Search Interfaces. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 23–41. https://doi.org/10.1145/3698204.3716444

Azzopardi, L., Nicol, E., Briggs, J., Moncur, W., Schafer, B., Nash, C., & Duheric, M. (2025). Assessing Risks in Online Information Sharing. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 71–80. https://doi.org/10.1145/3698204.3716447

Nasser, H., Da Costa Pereira, C., Escazut, C., & Tettamanzi, A. (2025). Personalized Knowledge Gain Estimation Through Query-Driven Learning Goal Inference in Search As Learning. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 263–272. https://doi.org/10.1145/3698204.3716463

de la Mora Velasco, E., & Moreno, M. (2025). Music and online learning: New perspectives and directions. Educational Technology Research and Development. https://doi.org/10.1007/s11423-025-10491-0

Shukla, S., Torres, J., Mishra, A., Gwizdka, J., & Roychowdhury, S. (2025, February 17). A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond. arXiv. https://arxiv.org/abs/2502.12048v1

Ekin, M., Krejtz, K., Duarte, C., Duchowski, A. T., & Krejtz, I. (2025). Prediction of intrinsic and extraneous cognitive load with oculometric and biometric indicators. Scientific Reports, 15(1), 5213. https://doi.org/10.1038/s41598-025-89336-y

Medeiros, J., Bernardes, A., Couceiro, R., Oliveira, P., Madeira, H., Teixeira, C., & Carvalho, P. (2025). Optimal frequency bands for pupillography for maximal correlation with HRV. Scientific Reports, 15(1), 3361. https://doi.org/10.1038/s41598-025-85663-2

Kumar, A., Prol, D., Alipour, A., & Ragavan, S. S. (2025). Web vs. LLMs: An Empirical Study of Learning Behaviors of CS2 Students (arXiv:2501.11935). arXiv. https://doi.org/10.48550/arXiv.2501.11935

Srinivasan, A., Ellemose, J., Butcher, P. W. S., Ritsos, P. D., & Elmqvist, N. (2025). Attention-Aware Visualization: Tracking and Responding to User Perception Over Time. IEEE Transactions on Visualization and Computer Graphics, 31(1), 1017–1027. https://doi.org/10.1109/TVCG.2024.3456300

Aliannejadi, M., Gwizdka, J., & Zamani, H. (2025). Interactions with Generative Information Retrieval Systems. In R. W. White & C. Shah (Eds.), Information Access in the Era of Generative AI (pp. 47–71). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-73147-1_3
bling%20multi-modal%20interactions%20for%20information%20access.%22%2C%22bookTitle%22%3A%22Information%20Access%20in%20the%20Era%20of%20Generative%20AI%22%2C%22date%22%3A%222025%22%2C%22originalDate%22%3A%22%22%2C%22originalPublisher%22%3A%22%22%2C%22originalPlace%22%3A%22%22%2C%22format%22%3A%22%22%2C%22ISBN%22%3A%22978-3-031-73147-1%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-73147-1_3%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-73147-1_3%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22IC7YILZX%22%5D%2C%22dateModified%22%3A%222025-11-24T22%3A43%3A12Z%22%7D%7D%2C%7B%22key%22%3A%22RY2KVR6E%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A11531701%2C%22username%22%3A%22Dawn202209%22%2C%22name%22%3A%22%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fdawn202209%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Shi%20and%20Gwizdka%22%2C%22parsedDate%22%3A%222025%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BShi%2C%20L.%2C%20%26amp%3B%20Gwizdka%2C%20J.%20%282025%29.%20The%20Effects%20of%20Confirmation%20Bias%20and%20Readability%20on%20Relevance%20Assessment%3A%20An%20Eye-Tracking%20Study.%20%26lt%3Bi%26gt%3BInformation%20Systems%20and%20Neuroscience%26lt%3B%5C%2Fi%26gt%3B%2C%20137%26%23x2013%3B146.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-71385-9_11%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-71385-9_11%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DRY2KVR6E%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22The%20Effects%20of%20Confirmation%20Bias%20and%20Readability%20on%20Relevance%20Assessment%3A%20An%20Eye-Tracking%20Study%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Li%22%2C%22lastName%22%3A%22Shi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jacek%22%2C%22lastName%22%3A%22Gwizdka%22%7D%5D%2C%22abstractNote%22%3A%22This%20ongoing%20study%20is%20investigating%20the%20effects%20of%20confirmation%20bias%20and%20document%20readability%20on%20user%20document%20relevance%20judgments%20in%20interactive%20information%20systems.%20Preliminary%20results%20from%20a%20within-subjects%20eye-tracking%20experiment%20suggest%20a%20significant%20interaction...%22%2C%22proceedingsTitle%22%3A%22Information%20Systems%20and%20Neuroscience%22%2C%22conferenceName%22%3A%22NeuroIS%20Retreat%22%2C%22date%22%3A%222025%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-71385-9_11%22%2C%22ISBN%22%3A%22978-3-031-71385-9%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-031-71385-9_11%22%2C%22ISSN%22%3A%222
195-4976%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22IC7YILZX%22%5D%2C%22dateModified%22%3A%222025-11-24T22%3A44%3A31Z%22%7D%7D%2C%7B%22key%22%3A%22UH89TQAT%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A11531701%2C%22username%22%3A%22Dawn202209%22%2C%22name%22%3A%22%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fdawn202209%22%2C%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Gwizdka%20et%20al.%22%2C%22parsedDate%22%3A%222025%22%2C%22numChildren%22%3A1%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BGwizdka%2C%20J.%2C%20Mostafa%2C%20J.%2C%20Moshfeghi%2C%20Y.%2C%20%26amp%3B%20vom%20Brocke%2C%20J.%20%282025%29.%20Neurophysiological%20Approaches%20for%20Understanding%20Information%20Seeking%20Behavior%3A%20A%20NeuroIS%202024%20Panel.%20%26lt%3Bi%26gt%3BInformation%20Systems%20and%20Neuroscience%26lt%3B%5C%2Fi%26gt%3B%2C%20397%26%23x2013%3B401.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-DOIURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-71385-9_35%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-71385-9_35%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3DUH89TQAT%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22conferencePaper%22%2C%22title%22%3A%22Neurophysiological%20Approaches%20for%20Understanding%20Information%20Seeking%20Behavior%3A%20A%20NeuroIS%202024%20Panel%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jacek%22%2C%22lastName%22%3A%22Gwizdka%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Javed%22%2C%22lastName%22%3A%22Mostafa%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Yashar%22%2C%22lastName%22%3A%22Moshfeghi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jan%22%2C%22lastName%22%3A%22vom%20Brocke%22%7D%5D%2C%22abstractNote%22%3A%22We%20propose%20a%20panel%20on%20neurophysiological%20approaches%20for%20understanding%20information%20seeking%20behavior.%22%2C%22proceedingsTitle%22%3A%22Information%20Systems%20and%20Neuroscience%22%2C%22conferenceName%22%3A%22NeuroIS%20Retreat%22%2C%22date%22%3A%222025%22%2C%22eventPlace%22%3A%22%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-71385-9_35%22%2C%22ISBN%22%3A%22978-3-031-71385-9%22%2C%22citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Flink.springer.com%5C%2Fchapter%5C%2F10.1007%5C%2F978-3-031-71385-9_35%22%2C%22ISSN%22%3A%222195-4976%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%22IC7YILZX%22%5D%2C%22dateModified%22%3A%222025-11-24T22%3A43%3A19Z%22%7D%7D%2C%7B%22key%22%3A%226VU9PBG6%22%2C%22library%22%3A%7B%22id%22%3A1679373%7D%2C%22meta%22%3A%7B%22lastModifiedByUser%22%3A%7B%22id%22%3A11531701%2C%22username%22%3A%22Dawn202209%22%2C%22name%22%3A%22%22%2C%22links%22%3A%7B%22alternate%22%3A%7B%22href%22%3A%22https%3A%5C%2F%5C%2Fwww.zotero.org%5C%2Fdawn202209%22%2C
%22type%22%3A%22text%5C%2Fhtml%22%7D%7D%7D%2C%22creatorSummary%22%3A%22Aliannejadi%20et%20al.%22%2C%22parsedDate%22%3A%222025%22%2C%22numChildren%22%3A0%7D%2C%22bib%22%3A%22%26lt%3Bdiv%20class%3D%26quot%3Bcsl-bib-body%26quot%3B%20style%3D%26quot%3Bline-height%3A%202%3B%20padding-left%3A%201em%3B%20text-indent%3A-1em%3B%26quot%3B%26gt%3B%5Cn%20%20%26lt%3Bdiv%20class%3D%26quot%3Bcsl-entry%26quot%3B%26gt%3BAliannejadi%2C%20M.%2C%20Gwizdka%2C%20J.%2C%20%26amp%3B%20Zamani%2C%20H.%20%282025%29.%20Interactions%20with%20Generative%20Information%20Retrieval%20Systems.%20In%20R.%20W.%20White%20%26amp%3B%20C.%20Shah%20%28Eds.%29%2C%20%26lt%3Bi%26gt%3BInformation%20Access%20in%20the%20Era%20of%20Generative%20AI%26lt%3B%5C%2Fi%26gt%3B%20%28pp.%2047%26%23x2013%3B71%29.%20Springer%20Nature%20Switzerland.%20%26lt%3Ba%20class%3D%26%23039%3Bzp-ItemURL%26%23039%3B%20target%3D%26%23039%3B_blank%26%23039%3B%20href%3D%26%23039%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-73147-1_3%26%23039%3B%26gt%3Bhttps%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-73147-1_3%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3Ba%20title%3D%26%23039%3BCite%20in%20RIS%20Format%26%23039%3B%20class%3D%26%23039%3Bzp-CiteRIS%26%23039%3B%20data-zp-cite%3D%26%23039%3Bapi_user_id%3D1679373%26amp%3Bitem_key%3D6VU9PBG6%26%23039%3B%20href%3D%26%23039%3Bjavascript%3Avoid%280%29%3B%26%23039%3B%26gt%3BCite%26lt%3B%5C%2Fa%26gt%3B%20%26lt%3B%5C%2Fdiv%26gt%3B%5Cn%26lt%3B%5C%2Fdiv%26gt%3B%22%2C%22data%22%3A%7B%22itemType%22%3A%22bookSection%22%2C%22title%22%3A%22Interactions%20with%20Generative%20Information%20Retrieval%20Systems%22%2C%22creators%22%3A%5B%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Mohammad%22%2C%22lastName%22%3A%22Aliannejadi%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Jacek%22%2C%22lastName%22%3A%22Gwizdka%22%7D%2C%7B%22creatorType%22%3A%22author%22%2C%22firstName%22%3A%22Hamed%22%2C%22lastName%22%3A%22Zamani%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Ryen%20W.%22%2C%22lastName%22%3A%22White%22%7D%2C%7B%22creatorType%22%3A%22editor%22%2C%22firstName%22%3A%22Chirag%22%2C%22lastName%22%3A%22Shah%22%7D%5D%2C%22abstractNote%22%3A%22Recent%20advancements%20in%20generative%20artificial%20intelligence%20have%20provided%20unique%20opportunities%20for%20seamless%20information%20access%20and%20discovery%2C%20particularly%20through%20natural%20language%20interactions.%20These%20technologies%20enable%20users%20to%20easily%20describe%20their%20needs%20and%20provide%20interactive%20feedback.%20This%20chapter%20provides%20an%20overview%20of%20the%20opportunities%20and%20challenges%20in%20interacting%20with%20information%20access%20systems%20powered%20by%20generative%20artificial%20intelligence%20technologies.%20We%20focus%20on%20user%20interfaces%20in%20these%20systems%20and%20various%20interactions%20for%20describing%20and%20clarifying%20users%5Cu2019%20needs%2C%20refining%20the%20result%20list%20produced%20by%20the%20system%2C%20providing%20proactive%20feedback%20to%20the%20system%2C%20the%20system%20proactively%20initiating%20conversations%2C%20explaining%20the%20result%20list%2C%20and%20enabling%20multi-modal%20interactions%20for%20information%20access.%22%2C%22bookTitle%22%3A%22Information%20Access%20in%20the%20Era%20of%20Generative%20AI%22%2C%22date%22%3A%222025%22%2C%22originalDate%22%3A%22%22%2C%22originalPublisher%22%3A%22%22%2C%22originalPlace%22%3A%22%22%2C%22format%22%3A%22%22%2C%22ISBN%22%3A%22978-3-031-73147-1%22%2C%22DOI%22%3A%2210.1007%5C%2F978-3-031-73147-1_3%22%2C%22
citationKey%22%3A%22%22%2C%22url%22%3A%22https%3A%5C%2F%5C%2Fdoi.org%5C%2F10.1007%5C%2F978-3-031-73147-1_3%22%2C%22ISSN%22%3A%22%22%2C%22language%22%3A%22en%22%2C%22collections%22%3A%5B%228384K4BB%22%5D%2C%22dateModified%22%3A%222025-02-10T21%3A48%3A27Z%22%7D%7D%5D%7D
Arranz-Romero, J., Roig-Vila, R., & Cazorla, M. (2026). IPA 2.0: Validation of an Interpretable Emotion-Attention Index for Neuro-Adaptive Learning with AI. Applied Sciences, 16(5). https://doi.org/10.3390/app16052515 Cite
Jayawardena, G., Jayawardana, Y., & Gwizdka, J. (2025). Measuring Mental Effort in Real Time Using Pupillometry. Journal of Eye Movement Research, 18(6), 70. https://doi.org/10.3390/jemr18060070 Cite
Senova, S., Palfi, S., Cretenoud, A., Begue, J., Labiod, M. A., Mellouk, A., Wolkenstein, P., Dauguet, J., & Mainar, P. (2025). Real-time and personalized dry EEG neurofeedback increased students’ attention during online teaching in everyday life conditions. Neurocomputing, 132045. https://doi.org/10.1016/j.neucom.2025.132045 Cite
Spina, D., Gwizdka, J., Ji, K., Moshfeghi, Y., Mostafa, J., Ruotsalo, T., Zhang, M., Ahmad, A., Lawati, S. F. D. A., Boonprakong, N., Fernando, N., He, J., Hoeber, O., Jayawardena, G., Lee, B.-G., Liu, H., Pike, M., Pirmoradi, A., Nakisa, B., … Wilson, M. L. (2025). Report on the 3rd Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval (NeuroPhysIIR 2025) at SIGIR CHIIR 2025. SIGIR Forum, 59(1), 1–43. https://doi.org/10.1145/3769733.3769740 Cite
Wei, L., Yu, Y., Qin, Y., & Zhang, S. (2025). A Survey of EEG-Based Approaches to Classroom Attention Assessment in Education. Information, 16(10), 860. https://doi.org/10.3390/info16100860 Cite
Liu, H., Gwizdka, J., & Lease, M. (2025). Exploring Multidimensional Checkworthiness: Designing AI-assisted Claim Prioritization for Human Fact-checkers. CSCW’2025, Proc. ACM Hum.-Comput. Interact., 9(7). https://doi.org/10.1145/3757473 Cite
Dang, Q., Kucukosmanoglu, M., Anoruo, M., Kargosha, G., Conklin, S., & Brooks, J. (2025). Automatic detection of cognitive events using machine learning and understanding models’ interpretations of human cognition. Scientific Reports, 15(1), 30506. https://doi.org/10.1038/s41598-025-16165-4 Cite
Latifzadeh, K., Gwizdka, J., & Leiva, L. A. (2025). A Versatile Dataset of Mouse and Eye Movements on Search Engine Results Pages. Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’25, 3412–3421. https://doi.org/10.1145/3726302.3730325 Cite
Silvan, A., Parra, L. C., & Madsen, J. (2025). Real-Time Estimation of Overt Attention from Dynamic Features of the Face Using Deep Learning. 2025 47th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 1–5. https://doi.org/10.1109/EMBC58623.2025.11254013 Cite
Jayawardena, G., Jayawardana, Y., Abeysinghe, Y., Mahanama, B., Jayarathna, S., & Gwizdka, J. (2025). A Real-Time Approach to Capture Ambient and Focal Attention in Visual Search. Proceedings of the 2025 Symposium on Eye Tracking Research and Applications, ETRA ’25, 1–7. https://doi.org/10.1145/3715669.3723111 Cite
Gollan, B., & Raggam, P. (2025). Beyond Gaze: Quantifying Conscious Perception Through an Innovative Eye Tracking Biomarker. Proc. ACM Hum.-Comput. Interact., 9(3), ETRA06:1–ETRA06:17. https://doi.org/10.1145/3725831 Cite
Kurzom, N., Misherky, J., & Mendelsohn, A. (2025). The Effect of Background Music on Memory Formation of Spoken Words: A Tradeoff Between Tension Perception and Memory. Music Perception, 1–16. https://doi.org/10.1525/mp.2025.2449567 Cite
Gwizdka, J., & Cole, M. (2025). g-Rel-READER: A Dataset for Relevance and Reading Evaluation through Advanced Data from Eye-tracking and EEG Recordings. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’25, 377–381. https://doi.org/10.1145/3698204.3716474 Cite
Gwizdka, J., Mostafa, J., Zhang, M., Ji, K., Moshfeghi, Y., Ruotsalo, T., & Spina, D. (2025). NeuroPhysIIR: International Workshop on NeuroPhysiological Approaches for Interactive Information Retrieval. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’25, 413–415. https://doi.org/10.1145/3698204.3716481 Cite
Chavula, C., & Kist, C. (2025). Fitting to the body: The role of embodiment in beauty information seeking. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, CHIIR ’25, 140–153. https://doi.org/10.1145/3698204.3716452 Cite
Le, D. D., Kieu, H. D., Le, T. H., & Ngo, T. D. (2025). Attention detection: an EEG and eye tracking features fusion approach in eye-based interaction systems. Neural Computing and Applications. https://doi.org/10.1007/s00521-025-11195-5 Cite
Lopez-Cardona, A., Emami, P., Idesis, S., Duraisamy, S., Leiva, L. A., & Arapakis, I. (2025). A Comparative Study of Scanpath Models in Graph-Based Visualization. https://doi.org/10.1145/3715669.3725882 Cite
Mishra, A., Shukla, S., Torres, J., Gwizdka, J., & Roychowdhury, S. (2025). Thought2Text: Text Generation from EEG Signal using Large Language Models (LLMs). In L. Chiruzzo, A. Ritter, & L. Wang (Eds.), Findings of the Association for Computational Linguistics: NAACL 2025 (pp. 3747–3759). Association for Computational Linguistics. https://aclanthology.org/2025.findings-naacl.207/ Cite
Pirmoradi, A., Hoeber, O., Harvey, M., Momeni, M., & Gleeson, D. (2025). Integrating Eye Tracking, Feature Use, and Emotional Valence: A Multimodal Approach to Evaluating Search Interfaces. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 23–41. https://doi.org/10.1145/3698204.3716444 Cite
Vizgirda, V., McNeill, F., & Robertson, J. (2025). Teacher Online Educational Resource Search in Education System Context. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 81–99. https://doi.org/10.1145/3698204.3716448 Cite
Bogers, T., Kaya, M., & Gäde, M. (2025). From Queries to Candidates: Exploring Search and Source Interaction Behavior of Recruiters. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 112–128. https://doi.org/10.1145/3698204.3716450 Cite
Zerhoudi, S., & Granitzer, M. (2025). SearchLab: Exploring Conversational and Traditional Search Interfaces in Information Retrieval. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 382–389. https://doi.org/10.1145/3698204.3716475 Cite
Gibson, R. C., Meiklem, R., Moncur, W., & Ruthven, I. (2025). Online Information Disclosure and Information Privacy Practices During Significant Life Transitions: A Scoping Review. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 42–56. https://doi.org/10.1145/3698204.3716445 Cite
Emter, F., & Chavula, C. (2025). Seeking Control: How Women Evaluate and Use Menopause Related Information. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 179–194. https://doi.org/10.1145/3698204.3716455 Cite
Zhang, J., Xie, Y., Ma, J., Zheng, Z., Zhai, S., & Wang, P. (2025). “I’m Looking for a Book with a Big Watermelon and Many Small Ants”: An Exploratory Study on a Visualized Search System for Preschoolers’ Picture Book Search. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 342–347. https://doi.org/10.1145/3698204.3716469 Cite
Van Der Sluis, F., & Azzopardi, L. (2025). Search Changes Consumers’ Minds: How Recognizing Gaps Drives Sustainable Choices. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 195–207. https://doi.org/10.1145/3698204.3716456 Cite
Bahl, R., Chang, S., McKay, D., Buchanan, G., & Cheong, M. (2025). A Whole New World: Migrant Journeys Through Digital Information Landscapes. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 247–262. https://doi.org/10.1145/3698204.3716460 Cite
Siro, C., Abbasiantaeb, Z., Yuan, Y., Aliannejadi, M., & De Rijke, M. (2025). Do Images Clarify? A Study on the Effect of Images on Clarifying Questions in Conversational Search. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 273–291. https://doi.org/10.1145/3698204.3716464 Cite
Mayerhofer, K., Capra, R., & Elsweiler, D. (2025). Blending Queries and Conversations: Understanding Trust, Verification, and System Choice in Search and Chat Interactions. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 168–178. https://doi.org/10.1145/3698204.3716454 Cite
Bogers, T., Gäde, M., Hall, M., Koolen, M., Petras, V., & Skov, M. (2025). Exploring the Zero-Shot Known-Item Retrieval Capabilities of LLMs for Casual Leisure Information Needs. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 316–325. https://doi.org/10.1145/3698204.3716466 Cite
Sun, R., Kong, R., Milton, A., Kluver, D., Paterson, I., & Konstan, J. A. (2025). Why They Come And Go: A Case Study of Productive Flyby Users and Their Rating Integrity Challenge in Movie Recommenders. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 1–11. https://doi.org/10.1145/3698204.3716442 Cite
Yang, Y., Capra, R., & Guo, M. (2025). Beyond the Surface: Investigating Explicit and Implicit Perceptions of Music Diversity. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 310–315. https://doi.org/10.1145/3698204.3716465 Cite
Orin, K., & Hoeber, O. (2025). Organizing Found Information in Public Digital Library Search. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 358–366. https://doi.org/10.1145/3698204.3716471 Cite
Liu, J., & He, J. (2025). Boundedly Rational Searchers Interacting with Medical Misinformation: Characterizing Context-Dependent Decoy Effects on Credibility and Usefulness Evaluation in Sessions. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 154–167. https://doi.org/10.1145/3698204.3716453 Cite
Choi, B., & Arguello, J. (2025). The Effects of Working Memory during a Search and Sensemaking Task. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 100–111. https://doi.org/10.1145/3698204.3716449 Cite
Liu, H., Zhao, S., Wang, S., Hansen, P., Oakley, I., & Le, K.-D. (2025). Designing Interactive Multimodal Information Retrieval and Access for Heads Up Computing (DIMIRA-HUC). Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 416–418. https://doi.org/10.1145/3698204.3716482 Cite
Hoeber, O. (2025). Design Principles for Exploratory Search Interfaces. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 12–22. https://doi.org/10.1145/3698204.3716443 Cite
Azzopardi, L., Nicol, E., Briggs, J., Moncur, W., Schafer, B., Nash, C., & Duheric, M. (2025). Assessing Risks in Online Information Sharing. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 71–80. https://doi.org/10.1145/3698204.3716447 Cite
Nasser, H., Da Costa Pereira, C., Escazut, C., & Tettamanzi, A. (2025). Personalized Knowledge Gain Estimation Through Query-Driven Learning Goal Inference in Search As Learning. Proceedings of the 2025 ACM SIGIR Conference on Human Information Interaction and Retrieval, 263–272. https://doi.org/10.1145/3698204.3716463 Cite
de la Mora Velasco, E., & Moreno, M. (2025). Music and online learning: new perspectives and directions. Educational Technology Research and Development. https://doi.org/10.1007/s11423-025-10491-0 Cite
Shukla, S., Torres, J., Mishra, A., Gwizdka, J., & Roychowdhury, S. (2025, February 17). A Survey on Bridging EEG Signals and Generative AI: From Image and Text to Beyond. arXiv.Org. https://arxiv.org/abs/2502.12048v1 Cite
Ekin, M., Krejtz, K., Duarte, C., Duchowski, A. T., & Krejtz, I. (2025). Prediction of intrinsic and extraneous cognitive load with oculometric and biometric indicators. Scientific Reports, 15(1), 5213. https://doi.org/10.1038/s41598-025-89336-y Cite
Medeiros, J., Bernardes, A., Couceiro, R., Oliveira, P., Madeira, H., Teixeira, C., & Carvalho, P. (2025). Optimal frequency bands for pupillography for maximal correlation with HRV. Scientific Reports, 15(1), 3361. https://doi.org/10.1038/s41598-025-85663-2 Cite
Kumar, A., Prol, D., Alipour, A., & Ragavan, S. S. (2025). Web vs. LLMs: An Empirical Study of Learning Behaviors of CS2 Students (arXiv:2501.11935). arXiv. https://doi.org/10.48550/arXiv.2501.11935 Cite
Srinivasan, A., Ellemose, J., Butcher, P. W. S., Ritsos, P. D., & Elmqvist, N. (2025). Attention-Aware Visualization: Tracking and Responding to User Perception Over Time. IEEE Transactions on Visualization and Computer Graphics, 31(1), 1017–1027. https://doi.org/10.1109/TVCG.2024.3456300 Cite
Aliannejadi, M., Gwizdka, J., & Zamani, H. (2025). Interactions with Generative Information Retrieval Systems. In R. W. White & C. Shah (Eds.), Information Access in the Era of Generative AI (pp. 47–71). Springer Nature Switzerland. https://doi.org/10.1007/978-3-031-73147-1_3 Cite
Shi, L., & Gwizdka, J. (2025). The Effects of Confirmation Bias and Readability on Relevance Assessment: An Eye-Tracking Study. Information Systems and Neuroscience, 137–146. https://doi.org/10.1007/978-3-031-71385-9_11 Cite
Gwizdka, J., Mostafa, J., Moshfeghi, Y., & vom Brocke, J. (2025). Neurophysiological Approaches for Understanding Information Seeking Behavior: A NeuroIS 2024 Panel. Information Systems and Neuroscience, 397–401. https://doi.org/10.1007/978-3-031-71385-9_35 Cite