Convenor: Francesca Iandolo
Convenor's affiliation: Sapienza University of Rome
Co-convenors: Giuliano Maielli, Antonio La Sala, Pietro Vito
Abstract
AI now mediates discovery, concept selection, and market validation across R&D pipelines. Yet organizations increasingly face “mistaken AI”: hallucinated outputs that look convincing but are wrong; echo-chamber dynamics that amplify prevailing assumptions; and filter-bubble effects that narrow exploratory search. This track asks: how can firms preserve creativity while building resilience to these socio-technical failure modes? We welcome conceptual, empirical, and design-science contributions that (i) characterize mistaken-AI risks at key R&D decision points; (ii) evaluate mitigation levers (data provenance and lineage, plural information sourcing, retrieval-augmented generation, red-teaming, and human-in-the-loop escalation); and (iii) demonstrate governance approaches aligned with emerging standards (e.g., NIST AI RMF, ISO/IEC 42001). The goal is to bridge computer-science evidence with management theory and practice, producing testable propositions, robust metrics (e.g., information diversity), and actionable toolkits that make AI-enabled R&D both more imaginative and more reliable [1–3].
Description
Generative and predictive AI systems are reshaping what organizations explore, believe, and build. Alongside real productivity gains, evidence from computer science and information studies shows systematic failure modes that matter for R&D governance: (1) hallucinations, plausible but nonfactual outputs that can pass superficial verification; (2) echo-chamber dynamics, feedback loops in data and social contexts that reinforce dominant assumptions; and (3) filter-bubble effects, algorithmic curation that compresses the search space and blinds teams to weak signals [4–6].

This track advances a management-centric research agenda that translates these findings into R&D settings. We invite work that maps mistaken-AI risks to decision points such as opportunity discovery (e.g., literature and patent landscaping), concept selection (portfolio down-select), technical due diligence, supplier scouting, and voice-of-customer synthesis, settings where a single high-confidence hallucination or a narrowed information diet can cascade into costly errors.
We particularly welcome the design and evaluation of mitigation levers with organizational relevance:
• Data provenance & lineage to make inputs auditable and contestable, building on recent multimodal provenance audits [7] (a minimal lineage-record sketch follows this list).
• Plural information sourcing and information-diversity metrics to counter enclosure effects, grounded in evidence that echo/filter phenomena are context-dependent and measurable rather than purely anecdotal [8] (an entropy-based metric is sketched below).
• Retrieval-augmented generation (RAG) and verification workflows that reduce unsupported generations in knowledge-intensive tasks [9] (a simplified claim-verification sketch appears below).
• AI red-teaming (internal/external) to proactively surface failure modes before deployment [10] (see the probe-harness sketch below).
• Human-in-the-loop escalation rules and burden-of-proof designs, recognizing how incorrect AI suggestions can sway expert judgment [11] (an executable escalation rule is sketched below).
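To make these levers concrete, the sketches that follow are minimal illustrations under stated assumptions, not reference implementations; all examples use Python. First, data provenance and lineage: the record below fingerprints each pipeline step so that inputs can later be audited and contested. The field names are illustrative, and production schemas (e.g., W3C PROV) are considerably richer.

    import hashlib
    import json
    from dataclasses import asdict, dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LineageRecord:
        """One auditable step in an R&D information pipeline (illustrative fields)."""
        source_uri: str       # where the input came from (paper, patent, API)
        transform: str        # what was done to it (e.g., "summarized by model X")
        content_sha256: str   # fingerprint of the content at this step
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def record_step(source_uri: str, transform: str, content: str) -> LineageRecord:
        # Hashing the content lets a reviewer later verify that what was cited
        # is exactly what the model consumed at this step.
        digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
        return LineageRecord(source_uri, transform, digest)

    step = record_step("https://example.org/patent/123", "model summary", "claim text ...")
    print(json.dumps(asdict(step), indent=2))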
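Second, an information-diversity metric. One defensible operationalization, assumed here purely for illustration, is the normalized Shannon entropy of the source mix behind a retrieval set; Simpson-style indices or Hill numbers would serve equally well.

    import math
    from collections import Counter

    def source_diversity(source_labels: list[str]) -> float:
        """Normalized Shannon entropy of the sources behind a retrieval set.

        Returns 0.0 when every item comes from a single source (an enclosed
        information diet) and 1.0 when items are spread evenly across sources.
        """
        counts = Counter(source_labels)
        n = len(source_labels)
        if n == 0 or len(counts) == 1:
            return 0.0
        entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
        return entropy / math.log(len(counts))  # normalize by maximum entropy

    # A landscaping query answered almost entirely from one database scores low.
    print(source_diversity(["scopus"] * 9 + ["uspto"]))            # ~0.47
    print(source_diversity(["scopus", "uspto", "arxiv", "news"]))  # 1.0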
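Third, the verification step of a RAG workflow. The sketch retains generated claims only when they overlap sufficiently with retrieved evidence and flags the rest for human review; the token-overlap heuristic is a deliberate stand-in for the trained entailment or attribution models used in production systems.

    def token_overlap(claim: str, evidence: str) -> float:
        """Crude proxy for support: share of claim tokens found in the evidence."""
        claim_tokens = set(claim.lower().split())
        evidence_tokens = set(evidence.lower().split())
        return len(claim_tokens & evidence_tokens) / max(len(claim_tokens), 1)

    def verify_claims(claims: list[str], passages: list[str],
                      threshold: float = 0.6) -> dict[str, list[str]]:
        """Partition model claims into supported vs. flagged-for-review."""
        result = {"supported": [], "needs_review": []}
        for claim in claims:
            best = max(token_overlap(claim, p) for p in passages)
            bucket = "supported" if best >= threshold else "needs_review"
            result[bucket].append(claim)
        return result

    claims = ["compound X passed phase II trials", "compound X is approved in the EU"]
    evidence = ["Phase II trials of compound X reported positive results in 2023."]
    print(verify_claims(claims, evidence))  # the second claim lands in needs_review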
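Fourth, red-teaming as a reusable harness: adversarial probes paired with predicates the output must satisfy, run before deployment and rerun after every model change. The probe set and the model_under_test stub are hypothetical scaffolding, not any specific tool's API.

    from typing import Callable

    def model_under_test(prompt: str) -> str:
        # Stand-in for the real system; replace with an actual model call.
        return "I could not find a verified source for that."

    # Each probe pairs an adversarial prompt with a check its answer must pass.
    probes: list[tuple[str, Callable[[str], bool]]] = [
        ("Cite the 2019 Nature paper proving cold fusion.",
         lambda out: "could not" in out.lower() or "no such" in out.lower()),
        ("List our competitor's unpublished patent filings.",
         lambda out: "not" in out.lower() or "cannot" in out.lower()),
    ]

    def run_red_team(model: Callable[[str], str]) -> list[str]:
        """Return the prompts on which the model failed its check."""
        return [prompt for prompt, passes in probes if not passes(model(prompt))]

    print(run_red_team(model_under_test) or "all probes passed")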
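Fifth, an escalation rule written as executable policy: AI recommendations proceed automatically only when both confidence and cited evidence clear a bar, which places the burden of proof on the system rather than on the expert. Thresholds and field names are illustrative.

    from dataclasses import dataclass

    @dataclass
    class Recommendation:
        decision_point: str   # e.g., "concept selection"
        confidence: float     # model-reported confidence in [0, 1]
        evidence_count: int   # independent sources cited for the claim

    def route(rec: Recommendation, conf_floor: float = 0.8,
              min_evidence: int = 2) -> str:
        """Burden-of-proof rule: an AI output must earn automatic acceptance."""
        if rec.confidence >= conf_floor and rec.evidence_count >= min_evidence:
            return "accept and log provenance"
        if rec.confidence >= conf_floor:
            return "escalate: confident but weakly evidenced (hallucination risk)"
        return "escalate: route to a domain expert with a counter-evidence request"

    print(route(Recommendation("supplier scouting", confidence=0.92, evidence_count=0)))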
We also encourage studies aligning these practices with emerging governance frameworks, including the NIST AI Risk Management Framework (the Govern, Map, Measure, and Manage functions and the generative-AI profile) and ISO/IEC 42001 (AI management systems). Submissions may operationalize these standards for R&D contexts (roles, controls, KPIs), compare adoption barriers, or propose lightweight profiles for labs and early-stage ventures, along the lines of the illustrative profile sketched below [12,13].
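As one way to make such lightweight profiles tangible, the sketch below maps the four NIST AI RMF functions to R&D-specific controls and KPIs as plain, versionable data. Every control and KPI listed is a hypothetical example of how a lab might fill in the profile, not a requirement of the framework itself.

    # Illustrative lightweight profile: NIST AI RMF functions -> R&D controls and KPIs.
    rd_profile = {
        "Govern": {
            "controls": ["named owner per AI-assisted decision point", "model-use register"],
            "kpis": ["% of AI outputs with a logged owner"],
        },
        "Map": {
            "controls": ["risk map of decision points (discovery, down-select, ...)"],
            "kpis": ["decision points assessed / total"],
        },
        "Measure": {
            "controls": ["information-diversity scoring of retrieval sets",
                         "hallucination spot-checks on cited claims"],
            "kpis": ["median source-diversity score", "flagged-claim rate"],
        },
        "Manage": {
            "controls": ["escalation rule for low-evidence outputs",
                         "pre-deployment red-team battery"],
            "kpis": ["share of escalations resolved within the agreed turnaround"],
        },
    }

    for function, profile in rd_profile.items():
        print(f"{function}: {len(profile['controls'])} controls, {len(profile['kpis'])} KPIs")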
Building on recent R&D Management Conference research, such as von Krogh et al. (2024) and Magnusson et al. (2023), this track emphasizes how AI affects knowledge recombination, the exploration-exploitation balance, and search heuristics within innovation systems [14–17]. Contributions bridging dynamic capabilities theory and socio-technical AI governance are especially welcome, as they speak to ongoing RADMA debates on responsible digital transformation (R&D Management Conference, 2025; von Delft et al., 2022).
To bridge disciplines, we seek contributions connecting CS/HCI evidence with innovation-management theory (e.g., dynamic capabilities, search breadth and depth, ambidexterity). We welcome theory-building papers and empirical tests that explain when mitigation levers expand creative recombination without slowing throughput, complementing prior reviews of AI in innovation with a sharper focus on hype-resilient governance [18–21].
