Call for Papers
Submission Deadline: July 1, 2025 (AoE)
Submission Deadline for ISWC Redirect Papers: July 7, 2025 (AoE)
Acceptance Notification: July 9, 2025
Camera-Ready Deadline: July 26, 2025
Submission Website: https://new.precisionconference.com/submissions (Society: SIGCHI → Conference: Ubicomp/ISWC 2025 → Track: Ubicomp/ISWC 2025 Workshop: EvalComp)
Submission Instructions: Papers submitted for review should be 4-6 pages long (excluding references) in the double-column format. Please follow UbiComp's publication vendor instructions to prepare your manuscript.
Workshop Theme and Goals
We aim for EvalComp to be an interdisciplinary forum that goes beyond soliciting publications, bringing together academia and industry. In particular, the goals of this workshop are to collaboratively:
- Assess the evolving socio-technical challenges and concerns around evaluating foundation models and generative AI in ubiquitous technologies, including domains such as health sensing, mobility, context-aware interaction, and behavioral computing;
- Map the landscape of evaluation risks and opportunities, spanning input modalities, learning paradigms, deployment settings, and cultural contexts;
- Envision new evaluation paradigms that reflect real-world performance, fairness, robustness, and alignment across diverse and multimodal data;
- Explore novel methodologies for benchmarking, auditing, and human-in-the-loop evaluation, with a focus on generalization, uncertainty, and emergent behavior;
- Start an interdisciplinary discourse on what meaningful and responsible evaluation looks like in the age of foundation models; and
- Consolidate a global community of researchers and practitioners committed to shaping responsible evaluation practices through shared infrastructure, collaborative studies, and future funding opportunities.
Topics of Interest: EvalComp aims to bring together researchers and practitioners from academia, industry, and civil society to rethink how we comprehensively evaluate all aspects of foundation models, including LLMs and GenAI, especially as they are integrated into ubiquitous computing contexts. We invite submissions in a range of formats: technical papers, work-in-progress, evaluations-in-the-wild, toolkits, benchmarks, position papers, and provocations. Topics of interest include, but are not limited to:
- Novel evaluation frameworks, metrics, and protocols for foundation models and GenAI beyond conventional metrics (e.g., coherence, alignment, fairness, robustness, uncertainty)
- Evaluation of LLMs and generative systems in ubiquitous and multimodal environments (e.g., mobile, wearable, ambient computing, sensor fusion)
- Generalization and robustness under distribution shifts, personalization, and adversarial or real-world conditions
- Evaluation of emergent behaviors and unintended capabilities (e.g., hallucinations, misuse, model drift, over-reliance)
- Comparative evaluation across foundation models (e.g., zero-shot vs. fine-tuned, proprietary vs. open-source)
- Human-in-the-loop and participatory evaluation methods (e.g., co-creation, lived experience, subjective utility, auditing)
- Context-aware evaluation datasets and testbeds for real-world domains (e.g., health, mobility, education, accessibility)
- Evaluation of alignment and value adherence (e.g., cultural sensitivity, goal consistency, instruction following)
- Social and ethical dimensions of evaluation (e.g., privacy, representational harms, misinformation, inclusivity, bias detection)
- Tools, benchmarks, and infrastructure for reproducible, scalable, and community-driven evaluation
- Cross-cultural and multilingual evaluation across diverse geographies and population groups
- Evaluation challenges in edge, low-resource, or privacy-sensitive deployments (e.g., differential privacy, on-device evaluation)
- Implications of regulatory frameworks and standards (e.g., EU AI Act, NIST AI RMF) on evaluation design and reporting
Submission Details
We invite completed and ongoing research works, use cases, field studies, reviews, and position papers of 4-6 pages (excluding references). Submissions should follow UbiComp's publication vendor instructions and be submitted through PCS. Specifically, the correct template for submission is the double-column Word Submission Template or the double-column LaTeX Template, and the correct template for publication (i.e., after conditional acceptance) is the single-column Word Submission Template or the double-column LaTeX Template. Each article will be reviewed by two reviewers from a panel of experts consisting of external reviewers and organizers. To ensure accessibility, all authors should adhere to SIGCHI's Accessible Submission Guidelines. All accepted papers will be published on the workshop website and in the ACM Digital Library as part of the UbiComp 2025 proceedings. At least one author of each accepted paper must register for both the conference and the workshop. During the workshop, each paper will be presented in person by one of its authors.
NEW! We also accept papers that were initially submitted to ISWC; see the dedicated submission deadline above.
All submissions must be anonymized. Please direct any questions to EvalComp.workshop [AT] gmail.com.
Important Dates
- July 1, 2025 (AoE) – Submission Deadline
- July 7, 2025 (AoE) – Submission Deadline for ISWC Redirect Papers
- July 9, 2025 – Notification of Acceptance
- July 26, 2025 – Camera-Ready Deadline
- October 14-16, 2025 – Workshop Date
Organizing Committee
Bhawana Chhaglani (UMass Amherst)
Zhiyuan Wang (University of Virginia)
Lakmal Meegahapola (ETH Zurich)
Dimitris Spathis (Google, London, UK)
Marios Constantinides (CYENS Centre of Excellence, Nicosia, Cyprus)
Han Zhang (University of Washington)
Sofia Yfantidou (Aristotle University of Thessaloniki)
Niels van Berkel (Aalborg University)