Tag: emily
Bibliography items where this tag occurs: 35
- Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents / 2310.15065 / DOI: https://doi.org/10.48550/arXiv.2310.15065 / Published by ArXiv / Version released on 2023-11-29 / on (web) Publishing site
- Mapping the Ethics of Generative AI: A Comprehensive Scoping Review / 2402.08323 / DOI: https://doi.org/10.48550/arXiv.2402.08323 / Published by ArXiv / Version released on 2024-02-13 / on (web) Publishing site
- A Survey on Human-AI Collaboration with Large Foundation Models / 2403.04931 / DOI: https://doi.org/10.48550/arXiv.2403.04931 / Published by ArXiv / Version released on 2025-09-02 / on (web) Publishing site
- Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints / 2402.08171 / DOI: https://doi.org/10.1145/3630106.3658973 / Published by ArXiv / Version released on 2024-04-17 / on (web) Publishing site
- Modeling Emotions and Ethics with Large Language Models / 2404.13071 / DOI: https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / Version released on 2024-06-25 / on (web) Publishing site
- From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap / 2404.13131 / DOI: https://doi.org/10.1145/3630106.3658951 / Published by ArXiv / Version released on 2025-08-13 / on (web) Publishing site
- Improving governance outcomes through AI documentation: Bridging theory and practice / 2409.08960 / DOI: https://doi.org/10.48550/arXiv.2409.08960 / Published by ArXiv / Version released on 2024-12-09 / on (web) Publishing site
- Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes / 2409.12138 / DOI: https://doi.org/10.48550/arXiv.2409.12138 / Published by ArXiv / Version released on 2024-09-18 / on (web) Publishing site
- XTRUST: On the Multilingual Trustworthiness of Large Language Models / 2409.15762 / DOI: https://doi.org/10.48550/arXiv.2409.15762 / Published by ArXiv / Version released on 2024-09-24 / on (web) Publishing site
- AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / DOI: https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / Version released on 2024-12-12 / on (web) Publishing site
- Do LLMs Have Political Correctness? Analyzing Ethical Biases and Jailbreak Vulnerabilities in AI Systems / 2410.13334 / DOI: https://doi.org/10.48550/arXiv.2410.13334 / Published by ArXiv / Version released on 2024-10-23 / on (web) Publishing site
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / DOI: https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site
- Towards Safer Pretraining: Analyzing and Filtering Harmful Content in Webscale datasets for Responsible LLMs / 2505.02009 / DOI: https://doi.org/10.48550/arXiv.2505.02009 / Published by ArXiv / Version released on 2025-08-12 / on (web) Publishing site
- From Automation to Autonomy: A Survey on Large Language Models in Scientific Discovery / 2505.13259 / DOI: https://doi.org/10.48550/arXiv.2505.13259 / Published by ArXiv / Version released on 2025-09-17 / on (web) Publishing site
- AI Literacy for Legal AI Systems: A practical approach / 2505.18006 / DOI: https://doi.org/10.48550/arXiv.2505.18006 / Published by ArXiv / Version released on 2025-05-23 / on (web) Publishing site
- Locating Risk: Task Designers and the Challenge of Risk Disclosure in RAI Content Work / 2505.24246 / DOI: https://doi.org/10.48550/arXiv.2505.24246 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site
- Ask before you Build: Rethinking AI-for-Good in Human Trafficking Interventions / 2506.22512 / DOI: https://doi.org/10.48550/arXiv.2506.22512 / Published by ArXiv / Version released on 2025-06-26 / on (web) Publishing site
- Context, Credibility, and Control: User Reflections on AI Assisted Misinformation Tools / 2506.22940 / DOI: https://doi.org/10.48550/arXiv.2506.22940 / Published by ArXiv / Version released on 2025-06-28 / on (web) Publishing site
- Towards the Digital Me: A vision of authentic Conversational Agents powered by personal Human Digital Twins / 2506.23826 / DOI: https://doi.org/10.48550/arXiv.2506.23826 / Published by ArXiv / Version released on 2025-06-30 / on (web) Publishing site
- On the Surprising Efficacy of LLMs for Penetration-Testing / 2507.00829 / DOI: https://doi.org/10.48550/arXiv.2507.00829 / Published by ArXiv / Version released on 2025-07-01 / on (web) Publishing site
- Exploring Collaboration Patterns and Strategies in Human-AI Co-creation through the Lens of Agency: A Scoping Review of the Top-tier HCI Literature / 2507.06000 / DOI: https://doi.org/10.48550/arXiv.2507.06000 / Published by ArXiv / Version released on 2025-09-26 / on (web) Publishing site
- When Large Language Models Meet Law: Dual-Lens Taxonomy, Technical Advances, and Ethical Governance / 2507.07748 / DOI: https://doi.org/10.48550/arXiv.2507.07748 / Published by ArXiv / Version released on 2025-07-10 / on (web) Publishing site
- The Evolving Role of Large Language Models in Scientific Innovation: Evaluator, Collaborator, and Scientist / 2507.11810 / DOI: https://doi.org/10.48550/arXiv.2507.11810 / Published by ArXiv / Version released on 2025-07-16 / on (web) Publishing site
- Culling Misinformation from Gen AI: Toward Ethical Curation and Refinement / 2507.14242 / DOI: https://doi.org/10.48550/arXiv.2507.14242 / Published by ArXiv / Version released on 2025-07-17 / on (web) Publishing site
- The Silicon Reasonable Person: Can AI Predict How Ordinary People Judge Reasonableness? / 2508.02766 / DOI: https://doi.org/10.48550/arXiv.2508.02766 / Published by ArXiv / Version released on 2025-08-04 / on (web) Publishing site
- A Comprehensive Survey of Self-Evolving AI Agents: A New Paradigm Bridging Foundation Models and Lifelong Agentic Systems / 2508.07407 / DOI: https://doi.org/10.48550/arXiv.2508.07407 / Published by ArXiv / Version released on 2025-08-31 / on (web) Publishing site
- TVS Sidekick: Challenges and Practical Insights from Deploying Large Language Models in the Enterprise / 2509.26482 / DOI: https://doi.org/10.48550/arXiv.2509.26482 / Published by ArXiv / Version released on 2025-09-30 / on (web) Publishing site
- AI Adoption Across Mission-Driven Organizations / 2510.03868 / DOI: https://doi.org/10.48550/arXiv.2510.03868 / Published by ArXiv / Version released on 2025-10-04 / on (web) Publishing site
- Human-aligned AI Model Cards with Weighted Hierarchy Architecture / 2510.06989 / DOI: https://doi.org/10.48550/arXiv.2510.06989 / Published by ArXiv / Version released on 2025-10-08 / on (web) Publishing site
- Making Power Explicable in AI: Analyzing, Understanding, and Redirecting Power to Operationalize Ethics in AI Technical Practice / 2510.10588 / DOI: https://doi.org/10.48550/arXiv.2510.10588 / Published by ArXiv / Version released on 2025-10-12 / on (web) Publishing site
- How Can AI Augment Access to Justice? Public Defenders' Perspectives on AI Adoption / 2510.22933 / DOI: https://doi.org/10.48550/arXiv.2510.22933 / Published by ArXiv / Version released on 2025-10-27 / on (web) Publishing site
- BeautyGuard: Designing a Multi-Agent Roundtable System for Proactive Beauty Tech Compliance through Stakeholder Collaboration / 2511.12645 / DOI: https://doi.org/10.48550/arXiv.2511.12645 / Published by ArXiv / Version released on 2025-11-18 / on (web) Publishing site
- Cross-cultural value alignment frameworks for responsible AI governance: Evidence from China-West comparative analysis / 2511.17256 / DOI: https://doi.org/10.48550/arXiv.2511.17256 / Published by ArXiv / Version released on 2025-11-21 / on (web) Publishing site
- Irresponsible AI: big tech's influence on AI research and associated impacts / 2512.03077 / DOI: https://doi.org/10.48550/arXiv.2512.03077 / Published by ArXiv / Version released on 2025-11-27 / on (web) Publishing site
- The Decision Path to Control AI Risks Completely: Fundamental Control Mechanisms for AI Governance / 2512.04489 / DOI: https://doi.org/10.48550/arXiv.2512.04489 / Published by ArXiv / Version released on 2025-12-04 / on (web) Publishing site