Bridging Tradition and Technology: Robotics and AI Open a New Path for Classical Indian Music

Raghavasimhan Sankaranarayanan has over 200 album and film soundtrack credits to his name, and he has performed in more than 2,000 concerts across the globe. He has composed music across many genres and received numerous awards for his technical artistry on the violin. 

He is also a student at Georgia Tech, finishing up his Ph.D. in machine learning and robotics. 

One might wonder why a successful professional musician would choose to become a student again.

“I always wanted to integrate technology, music, and robotics because I love computers and machines that can move,” he said. “There’s been little research on Indian music from a technological perspective, and the AI and music industries largely focus on Western music. This bias is something I wanted to address.”

Sankaranarayanan, who began playing the violin at age 4, has focused his academic studies on bridging the musically technical with the deeply technological. Over the past six years at Georgia Tech, he has explored robotic musicianship, creating a robot violinist and an accompanying synthesizer capable of understanding, playing, and improvising the music closest to his heart: classical South Indian music.

The Essence of Carnatic Music

Carnatic music, a classical form of South Indian music, is believed to have originated in the Vedas, or ancient sacred Hindu texts. The genre has remained faithful to its historic form, with performers often using non-amplified sound or only a single mic. A typical performance includes improvisations and musical interaction between musicians in which violinists play a crucial role. 

Carnatic music is characterized by intricate microtonal pitch variations known as gamakas — musical embellishments that modify a single note’s pitch or seamlessly transition between notes. In contrast, Western music typically treats successive notes as distinct entities.
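The contrast can be sketched in code. Below is a deliberately simplified toy model (not drawn from Sankaranarayanan's actual system): a Western-style note sequence as a step function of pitch, versus a kampita, one common oscillating gamaka, as a single continuous pitch curve weaving between two notes.

```python
import math

def discrete_notes(freqs_hz, samples_per_note):
    """Western-style sequence: each note holds one fixed pitch."""
    contour = []
    for f in freqs_hz:
        contour.extend([f] * samples_per_note)
    return contour

def kampita_contour(low_hz, high_hz, n_samples, oscillations=3):
    """Toy model of a kampita (oscillating) gamaka as one continuous
    pitch curve between two notes -- a gross simplification of the
    real embellishment."""
    mid = (low_hz + high_hz) / 2.0
    amp = (high_hz - low_hz) / 2.0
    return [mid + amp * math.sin(2 * math.pi * oscillations * i / n_samples)
            for i in range(n_samples)]
```

Where the step function jumps abruptly between pitches, the gamaka contour never does, which is why note-by-note representations lose the essence of the music.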

Out of a desire to contribute technological advancements to the genre, Sankaranarayanan set out to innovate. When he joined the Center for Music Technology program under Gil Weinberg, professor and the Center’s director, no one at Georgia Tech had ever attempted to create a string-based robot.

“In our work, we develop physical robots that can understand music, apply logic to it, and then play, improvise, and inspire humans,” said Weinberg. “The goal is to enable meaningful interactions between robots and human musicians that foster creativity and the kind of musical discoveries that may not have happened otherwise.”

The Brain and the Body

Sankaranarayanan conceptualizes the robot as comprising two parts: the brain and the body. The “body” consists of mechanical systems that require algorithms to move accurately, including sliders and actuators that convert electrical signals into motion to produce sound. The “brain” consists of algorithms that enable the robot to understand and generate music.
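On the “body” side, a minimal sketch of the idea (with hypothetical gains and units, not the robot's actual control code) is a proportional controller that nudges a finger slider toward a target position supplied by the musical model:

```python
def slider_step(position_mm, target_mm, kp=0.5):
    """One update of a proportional controller driving a slider
    toward its target position (hypothetical gain and units)."""
    return position_mm + kp * (target_mm - position_mm)

# The "body" loop would run this at a fixed rate while the "brain"
# streams in new target positions derived from the music.
pos = 0.0
for _ in range(40):
    pos = slider_step(pos, 12.5)
```

Real actuator control adds velocity limits, damping, and feedback from sensors, but the division of labor is the same: the brain decides where to go, the body works out how to get there.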

In robotic musicianship, algorithms interpret and perform music, but building these algorithms for non-Western music is challenging; far less data is available for these forms. This lack of representation limits the capabilities of robotic musicianship and diminishes the cultural richness diverse musical forms can offer. 

Classical algorithms would struggle to capture the nuances of Carnatic music. To address this, Sankaranarayanan collected data specifically to model gamakas in Carnatic music. Then, using audio from performances by human musicians, he developed a machine-learning model to learn those gamakas. 

“You may ask, ‘Why not just use a computer?’ A computer can respond with algorithms, but music’s physicality is vital,” Weinberg said. “When musicians collaborate, they rely on the visual cues of movement, which make the interaction feel alive. Moreover, acoustic sound created by a physical instrument is richer and more expressive than computer-generated sound, and a robot musician provides this.”

Sankaranarayanan built the robot incrementally. Initially, he developed a bow mechanism that moved across wheels; now, the robot violin uses a real bow for authentic sound production.

Developing a New Musical Language

Another challenge involves technologies like MIDI (Musical Instrument Digital Interface), a protocol that enables electronic musical instruments and devices to communicate and sync by sending digital information about musical notes and performances. MIDI, however, is based on Western music systems and is limited in its application to music with microtonal pitch variations such as Carnatic music.
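MIDI's standard workaround for microtones is the pitch-bend message, a 14-bit value centered at 8192 that shifts every note on a channel by the same amount. A sketch of the conversion from a pitch offset in cents to a bend value shows both the mechanism and its limits:

```python
def cents_to_pitch_bend(cents, bend_range_cents=200):
    """Map a pitch offset in cents to a 14-bit MIDI pitch-bend value.

    8192 is the no-bend center; bend_range_cents is the receiver's
    configured bend range (commonly +/-2 semitones = 200 cents).
    """
    raw = 8192 + round(cents / bend_range_cents * 8192)
    return max(0, min(16383, raw))  # clamp to the 14-bit range
```

A continuously moving gamaka would require a dense stream of such messages, and because the bend applies to the whole channel rather than a single note, independent microtonal lines need one channel per note (the idea behind the later MPE extension). Hence the need for a representation built for continuous pitch from the start.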

So Sankaranarayanan and Weinberg developed their own system. Using audio files of human violin performances, the system extracts musical features that inform the robot on bowing techniques, left-hand movements, and pressure on strings. The software synthesizer then listens to Sankaranarayanan’s playing, responding and improvising in real time and creating a dynamic interplay between human and robot.
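One feature any such pipeline needs is the continuous pitch contour of the recording. As an illustration only (the published system's actual methods are not described here, and production pitch trackers are far more robust), a naive autocorrelation estimator for a single audio frame looks like this:

```python
import math

def estimate_f0(frame, sample_rate, fmin=80.0, fmax=1000.0):
    """Naive autocorrelation pitch estimate for one audio frame.

    Illustrative only: real pitch trackers handle noise, silence,
    and octave errors far more carefully.
    """
    n = len(frame)
    lag_min = int(sample_rate / fmax)
    lag_max = min(int(sample_rate / fmin), n - 1)
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, lag_max):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag if best_lag else 0.0
```

Sliding this over successive frames yields the kind of pitch contour that gamaka models and bowing controllers could consume.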

“Like in many other fields, bias also exists in the area of music AI, with many researchers and companies focusing on Western music and using AI to understand tonal systems,” Weinberg said. “Raghav’s work aims to showcase how AI can also generate and understand non-Western music, which he has achieved beautifully.”

Giving Back to the Community

Carnatic music and its community of musicians shaped Sankaranarayanan’s musical sensibility, motivating him to give back. He is developing an app that teaches Carnatic music, aiming to make the genre more appealing to younger audiences.

“By merging tradition with technology, we can expand the reach of traditional Carnatic music to younger musicians and listeners who desire more technological engagement,” Sankaranarayanan said. 

Through his innovative work, he is not just preserving Carnatic music but also reshaping its future for a digital age, inviting a new generation to engage with its deep heritage.


Related Media



  • Performance: Raghavasimhan Sankaranarayanan (Ph.D. Dissertation - School of Music)

For More Information Contact

Catherine Barzler, Senior Research Writer/Editor

catherine.barzler@gatech.edu