Harnessing Event-Driven and Multi-Agent Architectures for Complex Workflows in Generative AI Systems
Mary Grygleski

TL;DR

Generative AI systems struggle with complex business workflows due to limited adaptability and scalability. The session presented event-driven architectures and multi-agent systems as solutions to enhance real-time processing and decentralized decision-making in AI workflows. These approaches promise more efficient, adaptive, and scalable generative AI systems.

Opening

In today's rapidly evolving landscape, generative AI systems often face challenges when tasked with managing complex business workflows. The lack of adaptability and scalability in traditional AI systems limits their effectiveness in dynamic environments. Mary Grygleski, Director of Emerging Technologies at Callibrity, addressed these pressing issues at the Data Streaming Summit 2025, exploring how event-driven architectures and multi-agent systems can bridge this gap and transform generative AI into a more robust and flexible solution.

What You'll Learn (Key Takeaways)

  • Event-Driven Architectures for AI – These systems enhance the responsiveness and adaptability of AI by processing data in real-time and allowing decentralized decision-making.
  • Multi-Agent Systems in AI Workflows – Employing multiple specialized agents can break down complex tasks into manageable subtasks, enabling more efficient task execution and increased robustness.
  • Integration of AI and Event-Driven Systems – By leveraging event-driven approaches, AI systems can handle dynamic data flows, enhancing scalability and flexibility in business applications.
  • Challenges and Considerations – While offering significant benefits, event-driven and multi-agent systems pose challenges in debugging, tracking data flow, and ensuring consistency, necessitating robust observability and error-handling mechanisms.
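The interplay of the first two takeaways can be sketched in a few lines. The following is a minimal, illustrative example only; the session named no specific framework, and the agent names, topics, and `EventBus` class here are invented for illustration. Real systems would put LLM calls behind the handlers and a broker such as a streaming platform behind the bus.

```python
# Minimal in-process event bus: events are routed by topic to specialized
# agents, each handling one subtask of a larger workflow.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Decentralized decision-making: each subscribed agent reacts
        # independently; follow-up events it emits go back on the bus.
        for handler in self._subscribers[topic]:
            handler(payload)

def summarizer_agent(bus):
    def handle(doc):
        # Stand-in for an LLM summarization call.
        summary = doc["text"][:20]
        bus.publish("summary.ready", {"id": doc["id"], "summary": summary})
    return handle

def reviewer_agent(results):
    def handle(event):
        # Stand-in for a downstream review/validation agent.
        results.append(event)
    return handle

results = []
bus = EventBus()
bus.subscribe("doc.received", summarizer_agent(bus))
bus.subscribe("summary.ready", reviewer_agent(results))
bus.publish("doc.received",
            {"id": 1, "text": "Event-driven agents decompose complex workflows."})
```

Because agents only share events, not direct calls, new agents can be added by subscribing to existing topics, which is where the scalability and robustness benefits come from; the debugging and observability challenges arise for the same reason, since no single component sees the whole flow.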

Q&A Highlights

Q: Do we have any sort of framework to understand the capabilities of different LLMs (Large Language Models)?
A: Yes, LLMs vary in modality support, such as text, image, audio, video, and code. Some models are multimodal, but understanding their capabilities is crucial, and users should directly query LLMs for their supported modalities.

Q: Is there a way to automate the discovery of an LLM's capabilities?
A: It is possible to ask LLMs about their capabilities directly. However, smaller models may not be reliable due to higher chances of misinformation, suggesting the need for a more objective schema or grammar to verify capabilities.
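One way to make that "objective schema" concrete is a closed vocabulary of modalities that a model's self-reported capabilities are checked against. This is a hypothetical sketch, not a published standard; the class and field names are invented for illustration.

```python
# Hypothetical capability schema: a fixed modality vocabulary plus a
# validation step, so a model's self-reported capabilities can be
# checked rather than trusted blindly.
from dataclasses import dataclass, field

# Modalities mentioned in the session: text, image, audio, video, code.
KNOWN_MODALITIES = {"text", "image", "audio", "video", "code"}

@dataclass
class ModelCapabilities:
    name: str
    modalities: set = field(default_factory=set)

    def validate(self):
        # Reject any self-reported modality outside the agreed vocabulary,
        # so a hallucinated answer cannot silently widen the schema.
        unknown = self.modalities - KNOWN_MODALITIES
        if unknown:
            raise ValueError(f"unrecognized modalities: {sorted(unknown)}")
        return True

caps = ModelCapabilities(name="example-model", modalities={"text", "code"})
caps.validate()
```

A smaller model's unreliable answer would then fail validation instead of propagating bad metadata into the workflow.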

Q: How do event-driven architectures and agentic AI work together, and are there frameworks for this integration?
A: While the potential for integration is vast, frameworks and patterns are currently limited. More research and development are needed to create samples and test multi-agentic calls in event-driven systems for LLMs.

Mary Grygleski
Streaming Developer Advocate at DataStax, Java Champion, President of Chicago-JUG

Mary is a Java Champion and the Director of Emerging Technologies at Callibrity, a consulting firm based in Cincinnati, Ohio. She started as an engineer in Unix/C, transitioned to Java around 2000, and has never looked back. After 20+ years as a software engineer and technical architect, she discovered her true passion in developer and customer advocacy. Most recently she has served companies of various sizes, such as IBM, US Cellular, Bank of America, and the Chicago Mercantile Exchange, in topic areas including Java, GenAI, streaming systems, open source, cloud, and distributed messaging systems. She is also a very active tech community leader outside of her day job: she is the President of the Chicago Java Users Group (CJUG) and the Chicago Chapter Co-Lead for AICamp.
