Designing Sound with AI: The Art of Music Prompting

How translating influence, emotion, and iteration into language creates music with Suno and ChatGPT

Over the past few weeks, I’ve been exploring how AI music platforms like Suno.com can extend creativity, not replace it. The process has been less about pressing a button and more about designing prompts with intent—moving from raw influence to expressive, machine-readable instructions that generate new, surprising tracks.

Step 1: Identify the Source Influence

Every prompt began with a reference point: an artist, album, or song. Instead of imitating directly, I studied the DNA of the sound: instrumentation, rhythm, vocal delivery, production style. The goal was to understand what made it feel the way it did.

Step 2: Translate Sound into Language

Sound became structured description: BPM, chord progressions, textures, rhythms, vocal qualities. This mix of technical detail and poetic framing helped create prompts that machines could act on but still carried human expressiveness.
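The translation step above can be sketched in code: a small structure holding the musical attributes, rendered into a single prompt string. This is an illustrative sketch; the field names and phrasing are my own, not a Suno API.

```python
# Sketch: turn structured musical attributes into a prompt string.
# Field names and phrasing are illustrative, not a Suno API.
from dataclasses import dataclass, field


@dataclass
class StyleSpec:
    bpm: int
    chords: str
    textures: list[str] = field(default_factory=list)
    vocals: str = ""
    mood: str = ""

    def to_prompt(self) -> str:
        # Join the non-empty pieces into one machine-readable line.
        parts = [
            f"{self.bpm} BPM",
            f"chords: {self.chords}",
            ", ".join(self.textures),
            f"vocals: {self.vocals}" if self.vocals else "",
            self.mood,
        ]
        return "; ".join(p for p in parts if p)


spec = StyleSpec(
    bpm=86,
    chords="Am-F-C-G",
    textures=["dusty breakbeats", "warm tape hiss"],
    vocals="fragile, close-mic'd",
    mood="late-night melancholy",
)
print(spec.to_prompt())
```

The point of the structure is separation of concerns: the technical fields (BPM, chords) stay precise while the poetic fields (textures, mood) stay evocative, and both end up in the same prompt.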

Step 3: Refine Iteratively

Each draft stripped away clichés, sharpened imagery, and emphasized contrasts. A line like “fragile vocals over dusty breakbeats” is both evocative and actionable. Dozens of variants emerged before landing on concise, 1000-character prompts that guided the AI without constraining it.
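The character budget mentioned above can be enforced mechanically while iterating. A minimal sketch, assuming descriptors are ordered most- to least-important; the trimming strategy is my own convention, not Suno's documented behavior.

```python
# Sketch: keep a prompt under a character budget by dropping the
# least important descriptors first. The priority ordering and the
# trimming strategy are assumptions for illustration.
def trim_prompt(descriptors: list[str], limit: int = 1000) -> str:
    """Join descriptors (ordered most- to least-important) with '; '
    and drop trailing items until the result fits within the limit."""
    kept = list(descriptors)
    while kept and len("; ".join(kept)) > limit:
        kept.pop()  # drop the least important descriptor
    return "; ".join(kept)


draft = [
    "fragile vocals over dusty breakbeats",
    "minor-key piano loops",
    "vinyl crackle and tape saturation",
]
print(trim_prompt(draft, limit=60))
```

In practice, the real editing happens by ear and by eye; a check like this just guarantees that the final, hand-polished variant respects the platform's length constraint.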

Step 4: Capture the Emotional Core

Beyond technique, the real goal was to capture emotion: melancholy, tension, playfulness, urgency. By encoding feeling into prompts, the music created with AI resonated more deeply with human listeners.

The Outcome

The result is a library of style prompts that live at the intersection of music criticism, poetry, and production notes. They are structured enough for AI to use, but evocative enough to feel like art in their own right.

I’ve shared several examples ranging from UNKLE’s cinematic trip-hop to Bad Religion’s anthemic punk, The Cure’s atmospheric melancholy, and DJ Shadow’s sample-based beat craft. Each prompt acts as both instruction and inspiration, designed for others to remix, adapt, and make their own.

Why This Matters

This experiment highlights something bigger than music: AI creativity is less about automation and more about translation. Translating inspiration into structured language is the real creative act.

Just as in UX research or product design, the power comes from:

  • Identifying influences

  • Structuring insights

  • Iterating relentlessly

  • Capturing emotional resonance

The tools are new. The process — deep listening, careful articulation, and iterative refinement — is timeless.


© 2024 Tim Aidlin. All brands, screens, and assets remain the property of their respective owners and are used by permission. Some examples available during live review, on request.