Tag: distributions
Bibliography items where the tag occurs: 109
- Normative Ethics Principles for Responsible AI Systems: Taxonomy and Future Directions / 2208.12616 / ISBN:https://doi.org/10.48550/arXiv.2208.12616 / Published by ArXiv / on (web) Publishing site
- 3 Taxonomy of ethical principles
- Collect, Measure, Repeat: Reliability Factors for Responsible AI Data Collection / 2308.12885 / ISBN:https://doi.org/10.48550/arXiv.2308.12885 / Published by ArXiv / on (web) Publishing site
- 3 Reliability and Reproducibility Metrics for Responsible Data Collection
- Building Trust in Conversational AI: A Comprehensive Review and Solution Architecture for Explainable, Privacy-Aware Systems using LLMs and Knowledge Graph / 2308.13534 / ISBN:https://doi.org/10.48550/arXiv.2308.13534 / Published by ArXiv / on (web) Publishing site
- III. Comprehensive review of state-of-the-art LLMs
- The Promise and Peril of Artificial Intelligence -- Violet Teaming Offers a Balanced Path Forward / 2308.14253 / ISBN:https://doi.org/10.48550/arXiv.2308.14253 / Published by ArXiv / on (web) Publishing site
- 5 Research directions in AI safety and violet teaming
- Ethical Framework for Harnessing the Power of AI in Healthcare and Beyond / 2309.00064 / ISBN:https://doi.org/10.48550/arXiv.2309.00064 / Published by ArXiv / on (web) Publishing site
- References
- Pathway to Future Symbiotic Creativity / 2209.02388 / ISBN:https://doi.org/10.48550/arXiv.2209.02388 / Published by ArXiv / on (web) Publishing site
- Part 3 - 3 Comparison with Generative Models
- The Cambridge Law Corpus: A Corpus for Legal AI Research / 2309.12269 / ISBN:https://doi.org/10.48550/arXiv.2309.12269 / Published by ArXiv / on (web) Publishing site
- Cambridge Law Corpus: Datasheet
- The AI Incident Database as an Educational Tool to Raise Awareness of AI Harms: A Classroom Exploration of Efficacy, Limitations, & Future Improvements / 2310.06269 / ISBN:https://doi.org/10.48550/arXiv.2310.06269 / Published by ArXiv / on (web) Publishing site
- References
- Ethics of Artificial Intelligence and Robotics in the Architecture, Engineering, and Construction Industry / 2310.05414 / ISBN:https://doi.org/10.48550/arXiv.2310.05414 / Published by ArXiv / on (web) Publishing site
- 7. Future Research Direction
- Autonomous Vehicles an overview on system, cyber security, risks, issues, and a way forward / 2309.14213 / ISBN:https://doi.org/10.48550/arXiv.2309.14213 / Published by ArXiv / on (web) Publishing site
- 2. Autonomous vehicles
- Toward an Ethics of AI Belief / 2304.14577 / ISBN:https://doi.org/10.48550/arXiv.2304.14577 / Published by ArXiv / on (web) Publishing site
- 2. “Belief” in Humans and AI
3. Proposed Novel Topics in an Ethics of AI Belief
- A Conceptual Algorithm for Applying Ethical Principles of AI to Medical Practice / 2304.11530 / ISBN:https://doi.org/10.48550/arXiv.2304.11530 / Published by ArXiv / on (web) Publishing site
- 3 Ethical datasets and algorithm development guidelines
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare / 2309.12325 / ISBN:https://doi.org/10.48550/arXiv.2309.12325 / Published by ArXiv / on (web) Publishing site
- FUTURE-AI GUIDELINE
- Language Agents for Detecting Implicit Stereotypes in Text-to-Image Models at Scale / 2310.11778 / ISBN:https://doi.org/10.48550/arXiv.2310.11778 / Published by ArXiv / on (web) Publishing site
- 5 Related Work
- Kantian Deontology Meets AI Alignment: Towards Morally Grounded Fairness Metrics / 2311.05227 / ISBN:https://doi.org/10.48550/arXiv.2311.05227 / Published by ArXiv / on (web) Publishing site
- 3 Measuring Fairness Metrics
4 Deontological AI Alignment
- Prudent Silence or Foolish Babble? Examining Large Language Models' Responses to the Unknown / 2311.09731 / ISBN:https://doi.org/10.48550/arXiv.2311.09731 / Published by ArXiv / on (web) Publishing site
- D Additional Results and Figures
- Survey on AI Ethics: A Socio-technical Perspective / 2311.17228 / ISBN:https://doi.org/10.48550/arXiv.2311.17228 / Published by ArXiv / on (web) Publishing site
- References
- Control Risk for Potential Misuse of Artificial Intelligence in Science / 2312.06632 / ISBN:https://doi.org/10.48550/arXiv.2312.06632 / Published by ArXiv / on (web) Publishing site
- Appendix D Details of Benchmark Results
- Exploring the Frontiers of LLMs in Psychological Applications: A Comprehensive Review / 2401.01519 / ISBN:https://doi.org/10.48550/arXiv.2401.01519 / Published by ArXiv / on (web) Publishing site
- 7. Challenges and future directions
- Synthetic Data in AI: Challenges, Applications, and Ethical Implications / 2401.01629 / ISBN:https://doi.org/10.48550/arXiv.2401.01629 / Published by ArXiv / on (web) Publishing site
- 2. The generation of synthetic data
3. The usage of synthetic data
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making / 2401.08691 / ISBN:https://doi.org/10.48550/arXiv.2401.08691 / Published by ArXiv / on (web) Publishing site
- 3 Bias on demand: a framework for generating synthetic data with bias
II Mitigating bias - 5 Fairness mitigation
9 Towards fairness through time
- Beyond principlism: Practical strategies for ethical AI use in research practices / 2401.15284 / ISBN:https://doi.org/10.48550/arXiv.2401.15284 / Published by ArXiv / on (web) Publishing site
- 1 The “Triple-Too” problem of AI ethics
- Generative Artificial Intelligence in Healthcare: Ethical Considerations and Assessment Checklist / 2311.02107 / ISBN:https://doi.org/10.48550/arXiv.2311.02107 / Published by ArXiv / on (web) Publishing site
- Reference
- How do machines learn? Evaluating the AIcon2abs method / 2401.07386 / ISBN:https://doi.org/10.48550/arXiv.2401.07386 / Published by ArXiv / on (web) Publishing site
- References
- User Modeling and User Profiling: A Comprehensive Survey / 2402.09660 / ISBN:https://doi.org/10.48550/arXiv.2402.09660 / Published by ArXiv / on (web) Publishing site
- 4 Current Taxonomy
- Copyleft for Alleviating AIGC Copyright Dilemma: What-if Analysis, Public Perception and Implications / 2402.12216 / ISBN:https://doi.org/10.48550/arXiv.2402.12216 / Published by ArXiv / on (web) Publishing site
- 5 Public Perception: A Survey Method
- The METRIC-framework for assessing data quality for trustworthy AI in medicine: a systematic review / 2402.13635 / ISBN:https://doi.org/10.48550/arXiv.2402.13635 / Published by ArXiv / on (web) Publishing site
- METRIC-framework for medical training data
Methods
- The Minimum Information about CLinical Artificial Intelligence Checklist for Generative Modeling Research (MI-CLAIM-GEN) / 2403.02558 / ISBN:https://doi.org/10.48550/arXiv.2403.02558 / Published by ArXiv / on (web) Publishing site
- Part 5. Interpretability of generative models
- Moral Sparks in Social Media Narratives / 2310.19268 / ISBN:https://doi.org/10.48550/arXiv.2310.19268 / Published by ArXiv / on (web) Publishing site
- 5. Results
- Safeguarding Marketing Research: The Generation, Identification, and Mitigation of AI-Fabricated Disinformation / 2403.14706 / ISBN:https://doi.org/10.48550/arXiv.2403.14706 / Published by ArXiv / on (web) Publishing site
- Methodology
Data
Results
- A Critical Survey on Fairness Benefits of Explainable AI / 2310.13007 / ISBN:https://doi.org/10.1145/3630106.3658990 / Published by ArXiv / on (web) Publishing site
- 4 Critical Survey
- AI Alignment: A Comprehensive Survey / 2310.19852 / ISBN:https://doi.org/10.48550/arXiv.2310.19852 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Learning from Feedback
3 Learning under Distribution Shift
- Large Language Model Supply Chain: A Research Agenda / 2404.12736 / ISBN:https://doi.org/10.48550/arXiv.2404.12736 / Published by ArXiv / on (web) Publishing site
- 4 LLM Lifecycle
- Modeling Emotions and Ethics with Large Language Models / 2404.13071 / ISBN:https://doi.org/10.48550/arXiv.2404.13071 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Fairness in AI: challenges in bridging the gap between algorithms and law / 2404.19371 / ISBN:https://doi.org/10.48550/arXiv.2404.19371 / Published by ArXiv / on (web) Publishing site
- IV. Criteria for the Selection of Fairness Methods
- A Survey on Large Language Models for Critical Societal Domains: Finance, Healthcare, and Law / 2405.01769 / ISBN:https://doi.org/10.48550/arXiv.2405.01769 / Published by ArXiv / on (web) Publishing site
- 6 Ethics
- Unsocial Intelligence: an Investigation of the Assumptions of AGI Discourse / 2401.13142 / ISBN:https://doi.org/10.48550/arXiv.2401.13142 / Published by ArXiv / on (web) Publishing site
- 4 Towards Contextualized, Politically Legitimate, and Social Intelligence
- Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback / 2404.10271 / ISBN:https://doi.org/10.48550/arXiv.2404.10271 / Published by ArXiv / on (web) Publishing site
- 3. What Are the Collective Decision Problems and their Alternatives in this Context?
- Integrating Emotional and Linguistic Models for Ethical Compliance in Large Language Models / 2405.07076 / ISBN:https://doi.org/10.48550/arXiv.2405.07076 / Published by ArXiv / on (web) Publishing site
- 2 Related Work
4 Pilot Studies
- When AI Eats Itself: On the Caveats of Data Pollution in the Era of Generative AI / 2405.09597 / ISBN:https://doi.org/10.48550/arXiv.2405.09597 / Published by ArXiv / on (web) Publishing site
- 2 RQ1: What Happens When AI Eats Itself ?
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle / 2405.17921 / ISBN:https://doi.org/10.48550/arXiv.2405.17921 / Published by ArXiv / on (web) Publishing site
- Methods in clinical AI fairness research
- Responsible AI for Earth Observation / 2405.20868 / ISBN:https://doi.org/10.48550/arXiv.2405.20868 / Published by ArXiv / on (web) Publishing site
- 3 Secure AI in EO: Focusing on Defense Mechanisms, Uncertainty Modeling and Explainability
References
- Evaluating AI fairness in credit scoring with the BRIO tool / 2406.03292 / ISBN:https://doi.org/10.48550/arXiv.2406.03292 / Published by ArXiv / on (web) Publishing site
- 2 Preliminary Analysis
4 Fairness violation analysis in BRIO
- Federated Learning driven Large Language Models for Swarm Intelligence: A Survey / 2406.09831 / ISBN:https://doi.org/10.48550/arXiv.2406.09831 / Published by ArXiv / on (web) Publishing site
- III. Federated LLMs for Swarm Intelligence
- Artificial intelligence, rationalization, and the limits of control in the public sector: the case of tax policy optimization / 2407.05336 / ISBN:https://doi.org/10.48550/arXiv.2407.05336 / Published by ArXiv / on (web) Publishing site
- 5. Freedom, equality, and self-determination in the iron cage
- Unmasking Bias in AI: A Systematic Review of Bias Detection and Mitigation Strategies in Electronic Health Record-based Models / 2310.19917 / ISBN:https://doi.org/10.48550/arXiv.2310.19917 / Published by ArXiv / on (web) Publishing site
- References
- Assurance of AI Systems From a Dependability Perspective / 2407.13948 / ISBN:https://doi.org/10.48550/arXiv.2407.13948 / Published by ArXiv / on (web) Publishing site
- 2 Assurance for Systems Extended with AI and ML
- Deepfake Media Forensics: State of the Art and Challenges Ahead / 2408.00388 / ISBN:https://doi.org/10.48550/arXiv.2408.00388 / Published by ArXiv / on (web) Publishing site
- References
- The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources / 2406.16746 / ISBN:https://doi.org/10.48550/arXiv.2406.16746 / Published by ArXiv / on (web) Publishing site
- 1 Introduction
- Recent Advances in Generative AI and Large Language Models: Current Status, Challenges, and Perspectives / 2407.14962 / ISBN:https://doi.org/10.48550/arXiv.2407.14962 / Published by ArXiv / on (web) Publishing site
- II. Generative AI
IV. Challenges of Generative AI and LLMs
- Speculations on Uncertainty and Humane Algorithms / 2408.06736 / ISBN:https://doi.org/10.48550/arXiv.2408.06736 / Published by ArXiv / on (web) Publishing site
- 2 The Numbers of the Future
- Has Multimodal Learning Delivered Universal Intelligence in Healthcare? A Comprehensive Survey / 2408.12880 / ISBN:https://doi.org/10.48550/arXiv.2408.12880 / Published by ArXiv / on (web) Publishing site
- 3 Multimodal Medical Studies
6 Discussions of Current Studies
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems / 2408.15550 / ISBN:https://doi.org/10.48550/arXiv.2408.15550 / Published by ArXiv / on (web) Publishing site
- 4 Biases
5 Trustworthy and Responsible AI in Human-centric Applications
- Trust and ethical considerations in a multi-modal, explainable AI-driven chatbot tutoring system: The case of collaboratively solving Rubik's Cube / 2402.01760 / ISBN:https://doi.org/10.48550/arXiv.2402.01760 / Published by ArXiv / on (web) Publishing site
- D. CausalRating: A Tool To Rate Sentiment Analysis Systems for Bias
- Don't Kill the Baby: The Case for AI in Arbitration / 2408.11608 / ISBN:https://doi.org/10.48550/arXiv.2408.11608 / Published by ArXiv / on (web) Publishing site
- II. The Critics Are Killing the Baby
- On the Creativity of Large Language Models / 2304.00008 / ISBN:https://doi.org/10.48550/arXiv.2304.00008 / Published by ArXiv / on (web) Publishing site
- 3 Large Language Models and Boden’s Three Criteria
- Artificial intelligence to advance Earth observation: : A review of models, recent trends, and pathways forward / 2305.08413 / ISBN:https://doi.org/10.48550/arXiv.2305.08413 / Published by ArXiv / on (web) Publishing site
- Part I Modelling - Machine learning, computer vision and processing
1 Machine learning and computer vision for Earth observation
- Large language models as linguistic simulators and cognitive models in human research / 2402.04470 / ISBN:https://doi.org/10.48550/arXiv.2402.04470 / Published by ArXiv / on (web) Publishing site
- Six fallacies that misinterpret language models
- Navigating LLM Ethics: Advancements, Challenges, and Future Directions / 2406.18841 / ISBN:https://doi.org/10.48550/arXiv.2406.18841 / Published by ArXiv / on (web) Publishing site
- II. Conceptualization and frameworks
- ValueCompass: A Framework of Fundamental Values for Human-AI Alignment / 2409.09586 / ISBN:https://doi.org/10.48550/arXiv.2409.09586 / Published by ArXiv / on (web) Publishing site
- 4 Operationalizing ValueCompass: Methods to Measure Value Alignment of Humans and AI
5 Findings with ValueCompass: The Status Quo of Human-AI Value Alignment
- Artificial Human Intelligence: The role of Humans in the Development of Next Generation AI / 2409.16001 / ISBN:https://doi.org/10.48550/arXiv.2409.16001 / Published by ArXiv / on (web) Publishing site
- III. Path Leading to AHI
- Ethical and Scalable Automation: A Governance and Compliance Framework for Business Applications / 2409.16872 / ISBN:https://doi.org/10.48550/arXiv.2409.16872 / Published by ArXiv / on (web) Publishing site
- Abstract
2. Literature Review
3. Methodology
4. Framework Development
6. Conclusion
- Safety challenges of AI in medicine / 2409.18968 / ISBN:https://doi.org/10.48550/arXiv.2409.18968 / Published by ArXiv / on (web) Publishing site
- 2 Inherent problems of AI related to medicine
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure / 2409.19104 / ISBN:https://doi.org/10.48550/arXiv.2409.19104 / Published by ArXiv / on (web) Publishing site
- 4 Results
- Clinnova Federated Learning Proof of Concept: Key Takeaways from a Cross-border Collaboration / 2410.02443 / ISBN:https://doi.org/10.48550/arXiv.2410.02443 / Published by ArXiv / on (web) Publishing site
- V. Proof of Concepts 2
- DailyDilemmas: Revealing Value Preferences of LLMs with Quandaries of Daily Life / 2410.02683 / ISBN:https://doi.org/10.48550/arXiv.2410.02683 / Published by ArXiv / on (web) Publishing site
- 3 An Analysis of Synthetically Generated Dilemma Vignettes and Human Values in Daily Dilemmas
4 Unveiling LLM's Value Preferences Through Action Choices in Everyday Dilemmas
- AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models / 2410.07561 / ISBN:https://doi.org/10.48550/arXiv.2410.07561 / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
4 Experimental Setup
5 Results
6 Conclusion
- A Simulation System Towards Solving Societal-Scale Manipulation / 2410.13915 / ISBN:https://doi.org/10.48550/arXiv.2410.13915 / Published by ArXiv / on (web) Publishing site
- 4 Analysis
- The Cat and Mouse Game: The Ongoing Arms Race Between Diffusion Models and Detection Methods / 2410.18866 / ISBN:https://doi.org/10.48550/arXiv.2410.18866 / Published by ArXiv / on (web) Publishing site
- III. Detection Methods Based on Image Analysis
VI. Evaluation Metrics
- TRIAGE: Ethical Benchmarking of AI Models Through Mass Casualty Simulations / 2410.18991 / ISBN:https://doi.org/10.48550/arXiv.2410.18991 / Published by ArXiv / on (web) Publishing site
- 4 Discussion
- Democratizing Reward Design for Personal and Representative Value-Alignment / 2410.22203 / ISBN:https://doi.org/10.48550/arXiv.2410.22203 / Published by ArXiv / on (web) Publishing site
- 5 Results: Study 1 - Multi-Agent Apple Farming
6 Results: Study 2 - The Moral Machine
- I Always Felt that Something Was Wrong.: Understanding Compliance Risks and Mitigation Strategies when Professionals Use Large Language Models / 2411.04576 / ISBN:https://doi.org/10.48550/arXiv.2411.04576 / Published by ArXiv / on (web) Publishing site
- Appendices
- Privacy-Preserving Video Anomaly Detection: A Survey / 2411.14565 / ISBN:https://doi.org/10.48550/arXiv.2411.14565 / Published by ArXiv / on (web) Publishing site
- V. Edge-Cloud Intelligence Empowered P2VAD
VII. Discussion
- Good intentions, unintended consequences: exploring forecasting harms / 2411.16531 / ISBN:https://doi.org/10.48550/arXiv.2411.16531 / Published by ArXiv / on (web) Publishing site
- 6 A Research agenda
- Can OpenAI o1 outperform humans in higher-order cognitive thinking? / 2412.05753 / ISBN:https://doi.org/10.48550/arXiv.2412.05753 / Published by ArXiv / on (web) Publishing site
- 2 Methods
- Political-LLM: Large Language Models in Political Science / 2412.06864 / ISBN:https://doi.org/10.48550/arXiv.2412.06864 / Published by ArXiv / on (web) Publishing site
- 5 Technical Foundations for LLM Applications in Political Science
- Responsible AI in the Software Industry: A Practitioner-Centered Perspective / 2412.07620 / ISBN:https://doi.org/10.48550/arXiv.2412.07620 / Published by ArXiv / on (web) Publishing site
- III. Findings
- Clio: Privacy-Preserving Insights into Real-World AI Use / 2412.13678 / ISBN:https://doi.org/10.48550/arXiv.2412.13678 / Published by ArXiv / on (web) Publishing site
- 2 High-level design of Clio
4 Clio for safety
6 Risks, ethical considerations, and mitigations
- Towards Friendly AI: A Comprehensive Review and New Perspectives on Human-AI Alignment / 2412.15114 / ISBN:https://doi.org/10.48550/arXiv.2412.15114 / Published by ArXiv / on (web) Publishing site
- IV. Applications
- Large Language Model Safety: A Holistic Survey / 2412.17686 / ISBN:https://doi.org/10.48550/arXiv.2412.17686 / Published by ArXiv / on (web) Publishing site
- 3 Value Misalignment
9 Technology Roadmaps / Strategies to LLM Safety in Practice
- INFELM: In-depth Fairness Evaluation of Large Text-To-Image Models / 2501.01973 / ISBN:https://doi.org/10.48550/arXiv.2501.01973 / Published by ArXiv / on (web) Publishing site
- 4 Method
- Datasheets for Healthcare AI: A Framework for Transparency and Bias Mitigation / 2501.05617 / ISBN:https://doi.org/10.48550/arXiv.2501.05617 / Published by ArXiv / on (web) Publishing site
- 3. Developing an Improved Machine-Readable Datasheet
- Hybrid Approaches for Moral Value Alignment in AI Agents: a Manifesto / 2312.01818 / ISBN:https://doi.org/10.48550/arXiv.2312.01818 / Published by ArXiv / on (web) Publishing site
- 5. Outlook & Implications
- Towards A Litmus Test for Common Sense / 2501.09913 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 7 Mathematical Formulation for LLMs and AI
- Uncovering Bias in Foundation Models: Impact, Testing, Harm, and Mitigation / 2501.10453 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 1 Introduction
2 Results and Discussion
4 Method
Supplementary
- Bias in Decision-Making for AI's Ethical Dilemmas: A Comparative Study of ChatGPT and Claude / 2501.10484 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Methodology
Results
- Examining the Expanding Role of Synthetic Data Throughout the AI Development Pipeline / 2501.18493 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 4 Findings
- DebiasPI: Inference-time Debiasing by Prompt Iteration of a Text-to-Image Generative Model / 2501.18642 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Abstract
1 Introduction
2 Related Work
3 Method
4 Experiments and Results
- FairT2I: Mitigating Social Bias in Text-to-Image Generation via Large Language Model-Assisted Detection and Attribute Rebalancing / 2502.03826 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 1. Introduction
4. Methodologies
6. Results
- Safety at Scale: A Comprehensive Survey of Large Model Safety / 2502.05206 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- 2 Vision Foundation Model Safety
6 Diffusion Model Safety
References
- The Odyssey of the Fittest: Can Agents Survive and Still Be Good? / 2502.05442 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- Introduction
Results
- Comprehensive Framework for Evaluating Conversational AI Chatbots / 2502.06105 / ISBN:https://doi.org/10.48550/arXiv. / Published by ArXiv / on (web) Publishing site
- IV. Key Metrics that capture the essence of ethical and governance compliance
- From large language models to multimodal AI: A scoping review on the potential of generative AI in medicine / 2502.09242 / ISBN:https://doi.org/10.48550/arXiv.2502.09242 / Published by ArXiv / on (web) Publishing site
- 6 Evaluation metrics for generative AI in medicine
- Multi-Agent Risks from Advanced AI / 2502.14143 / ISBN:https://doi.org/10.48550/arXiv.2502.14143 / Published by ArXiv / on (web) Publishing site
- References
- On the Trustworthiness of Generative Foundation Models: Guideline, Assessment, and Perspective / 2502.14296 / ISBN:https://doi.org/10.48550/arXiv.2502.14296 / Published by ArXiv / on (web) Publishing site
- 5 Benchmarking Text-to-Image Models
- Surgical Scene Understanding in the Era of Foundation AI Models: A Comprehensive Review / 2502.14886 / ISBN:https://doi.org/10.48550/arXiv.2502.14886 / Published by ArXiv / on (web) Publishing site
- II. Background and Challenges
- Fair Foundation Models for Medical Image Analysis: Challenges and Perspectives / 2502.16841 / ISBN:https://doi.org/10.48550/arXiv.2502.16841 / Published by ArXiv / on (web) Publishing site
- 4 Environmental Impact
- Comprehensive Analysis of Transparency and Accessibility of ChatGPT, DeepSeek, And other SoTA Large Language Models / 2502.18505 / ISBN:https://doi.org/10.48550/arXiv.2502.18505 / Published by ArXiv / on (web) Publishing site
- 2. Methodology
3. Results - Vision Language Models in Medicine / 2503.01863 / ISBN:https://doi.org/10.48550/arXiv.2503.01863 / Published by ArXiv / on (web) Publishing site
- IV. VLM Benchmarking and Evaluations
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions / 2503.02082 / ISBN:https://doi.org/10.48550/arXiv.2503.02082 / Published by ArXiv / on (web) Publishing site
- References
- AI Automatons: AI Systems Intended to Imitate Humans / 2503.02250 / ISBN:https://doi.org/10.48550/arXiv.2503.02250 / Published by ArXiv / on (web) Publishing site
- 2 Background & Related Work
3 Conceptual Framework for AI Automatons - Medical Hallucinations in Foundation Models and Their Impact on Healthcare / 2503.05777 / ISBN:https://doi.org/10.48550/arXiv.2503.05777 / Published by ArXiv / on (web) Publishing site
- 5 Mitigation Strategies
7 Annotations of Medical Hallucination with Clinical Case Records - Generative AI in Transportation Planning: A Survey / 2503.07158 / ISBN:https://doi.org/10.48550/arXiv.2503.07158 / Published by ArXiv / on (web) Publishing site
- 5 Technical Foundations for Generative AI Applications in Transportation Planning
- Detecting Dataset Bias in Medical AI: A Generalized and Modality-Agnostic Auditing Framework / 2503.09969 / ISBN:https://doi.org/10.48550/arXiv.2503.09969 / Published by ArXiv / on (web) Publishing site
- 4 Methods
- Synthetic Data for Robust AI Model Development in Regulated Enterprises / 2503.12353 / ISBN:https://doi.org/10.48550/arXiv.2503.12353 / Published by ArXiv / on (web) Publishing site
- Synthetic Data Generation for Enterprise AI Development
Case Studies in Regulated Industries
- Literature Review
- BEATS: Bias Evaluation and Assessment Test Suite for Large Language Models / 2503.24310 / ISBN:https://doi.org/10.48550/arXiv.2503.24310 / Published by ArXiv / on (web) Publishing site
- 3 Key Findings
- Bridging the Gap: Integrating Ethics and Environmental Sustainability in AI Research and Practice / 2504.00797 / ISBN:https://doi.org/10.48550/arXiv.2504.00797 / Published by ArXiv / on (web) Publishing site
- References
- Ethical AI on the Waitlist: Group Fairness Evaluation of LLM-Aided Organ Allocation / 2504.03716 / ISBN:https://doi.org/10.48550/arXiv.2504.03716 / Published by ArXiv / on (web) Publishing site
- 2 Methods
3 Results