Post by DubsTech


Register Now: https://luma.com/t0ztjbuv

Large Language Models like ChatGPT often feel intelligent. They explain complex ideas fluently, mimic reasoning, and respond with confidence. But does fluency imply understanding, or are we mistaking polished pattern recognition for true intelligence?

In this talk, Anjali Kadiyala will explore the illusion of intelligence in LLMs by unpacking pattern matching versus reasoning, compression versus comprehension, and emergence without understanding. She'll examine why LLMs can convincingly appear to reason, how human cognitive biases amplify our trust in fluent systems, and where these models succeed or fail under real-world complexity.

Finally, Anjali will discuss why this illusion is both powerful and risky, and how the most effective use of LLMs comes not from treating them as thinking entities but from treating them as tools within human-in-the-loop systems that amplify, rather than replace, human judgment.

Register Now: https://luma.com/t0ztjbuv
