
The Music AI Can't Hear: What's Actually Missing from the Future of Sound - #BOSTechWeek

πŸŒ™ Tech Week Boston's closing night. An intimate fireside to end the week on the questions most panels have been skipping. What's actually missing from the future of sound? The conversation around AI and music has been hijacked by one topic: generative music. But generation is just one layer of a much deeper set of unsolved, under-discussed, and commercially consequential problems. Today's music AI models can't reliably produce music outside Western tonal systems β€” failing on the quarter tones, maqam, raga traditions, and more that underpin hundreds of millions of listeners' musical cultures. And we still don't adequately understand how people actually perceive and evaluate AI-generated vs. human-made music β€” a gap that shapes trust, adoption, and every creative tool built on top of these models. An intimate fireside with two practitioners working on different frontiers of what music AI still gets wrong. 🎀 The panel β€’ Jad Al Masri β€” Founder & CEO, Motif Technologies (Techstars '25). Violinist, Composer, Creative Director, Technologist, Lawyer, Writer. His team's microtonal AI bias and maqam evaluation research is headed to the AI Music Summit in Berklee, ISMIR 2026 (Abu Dhabi), and NeurIPS 2026. β€’ Rithik Kundu β€” Manager at Joker Deck Ventures. Researcher and producer bridging music, tech, and people. NYU Music Tech '26 and GenAudio & AI lead, focused on how listeners perceive and evaluate AI-generated vs. human-made music. Recently presented his research at Sony AI. 
🎯 What we'll get into
β€’ The cultural blind spot β€” why current music AI fails on non-Western music, and why it's an evaluation problem before it's a data problem
β€’ The perception gap β€” how listeners actually tell AI from human, and what that means for every product built on top of generative models
β€’ The commercial case for depth β€” where real differentiation lives as generative fidelity becomes table stakes
β€’ What's next β€” the tools, datasets, and frameworks that need to exist for music AI to serve creators and listeners across traditions

🍷 The format
~90 minutes total. 25 min networking β†’ 35 min fireside + audience Q&A β†’ 30 min networking. Drinks provided.

πŸ’‘ Who this is for
This event is a must if you're investing in audio AI, building in music tech, researching creative computing, engineering the tools, or making music that doesn't fit neatly into a 12-tone system. You'll spend an evening with two practitioners who live in this field β€” not pontificating about it β€” dig into the specific technical and cultural problems most panels gloss over, and leave with real connections to the founders, researchers, engineers, and creators shaping what music AI actually sounds like next.

πŸ“ Part of Boston Tech Week 2026 Β· May 26–31
Spots are limited. RSVP early β€” we'll confirm closer to the date.

This event is part of #BOSTechWeek β€” a week of events hosted by VCs and startups to bring together the tech ecosystem. Learn more at www.tech-week.com.
