Tag: ai

  • The Spectrum of Authorship

    Authorship and Collaboration with AI

    I’ve been thinking a lot lately about authorship in creative work made in collaboration with AI, and in particular about how authorship changes once we start relying more heavily on technology to make our work.

    There’s a spectrum, I think. On one end is the composer who creates something entirely from themselves: literally just their body and their voice, no technology involved. They might sing, clap, and stomp in a purely improvised performance, creating an original work that is entirely ‘human’. As we move along the spectrum, we start introducing tools and technologies that extend what the body can do.

    At this early end of the spectrum, that might be the human–machine collaboration of playing a violin or a cello. The performer still feels like the author of the work, but their authorship is now distributed across their body and the instrument. The music is only possible through their interaction with that tool.

    Then we move into the world of recording. Technologies like microphones, tape machines, and DAWs allow us not only to capture sound but to shape and reorganise it. Here, authorship starts to spread out a little further. You might record fragments from synthesisers, field recordings, speech, or existing works, rearrange them, sculpt them, and construct a piece that exists mostly through the editing and transformation of material. The composition becomes a kind of organisation of material — structuring sound to give it some aesthetic meaning. In these contexts, the composer’s work relies entirely on the capabilities of modern technologies.

    Tools, Systems, and Co-Creation

    Things become more interesting when we get to technologies that don’t just record or process sound but actually start generating or influencing the musical material. MIDI effects in a DAW are a perfect example of this. They operate not on the level of concrete audio (as synthesisers and audio effects do), but on the level of abstract musical content — notes, rhythms, and chords. It’s possible to send a single note into a rack of MIDI effects, automate a range of parameters, and end up with a sophisticated chord sequence using inversions, borrowed chords, and extensions. The composer here is steering the technology in sophisticated ways. But who is making the chords?
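    One way to picture what such a MIDI effect is doing is as a simple pitch transformation. The sketch below is purely illustrative (the interval sets and their names are my own assumptions, not any DAW’s actual presets): a single incoming MIDI note is expanded into a chord by stacking intervals on top of it.

```python
# A toy sketch of a chord-generating MIDI effect: one incoming note
# (as a MIDI note number) is expanded into a chord by stacking
# intervals. The interval sets are illustrative assumptions, not any
# DAW's actual presets.
CHORD_SHAPES = {
    "major triad": [0, 4, 7],
    "minor ninth": [0, 3, 7, 10, 14],     # extended harmony
    "first inversion major": [-8, 0, 4],  # third voiced below the root
}

def chordify(root: int, shape: str) -> list[int]:
    """Turn a single MIDI note into a chord."""
    return [root + interval for interval in CHORD_SHAPES[shape]]

print(chordify(60, "minor ninth"))  # C4 in -> [60, 63, 67, 70, 74]
```

    The composer only supplies the note 60; the rack supplies the harmony.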

    Or consider an arpeggiator, which transforms a single chord into a patterned melody. It’s not just amplifying the human input; it’s creating new abstract musical content (the notes rather than the sound of those notes).
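    An arpeggiator can be sketched in the same spirit: cycle through the chord tones in a fixed order to produce a melodic pattern. Again, this is a hypothetical illustration, with the pattern names invented for the example rather than taken from any specific plugin.

```python
import itertools

def arpeggiate(chord: list[int], pattern: str = "up", steps: int = 8) -> list[int]:
    """Cycle through chord tones to produce a melodic pattern.

    Pattern names are invented for this example: 'up' repeats the
    chord bottom-to-top, while 'updown' bounces between the outer notes.
    """
    if pattern == "up":
        order = chord
    elif pattern == "updown":
        order = chord + chord[-2:0:-1]  # back down, without repeating the ends
    else:
        raise ValueError(f"unknown pattern: {pattern}")
    cycle = itertools.cycle(order)
    return [next(cycle) for _ in range(steps)]

# A held C major triad becomes an eight-note melodic line:
print(arpeggiate([60, 64, 67], "updown", 8))  # [60, 64, 67, 64, 60, 64, 67, 64]
```

    The human plays one chord; the algorithm decides which note sounds when.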

    The technology suggests material, and the composer reacts to it in a form of collaboration — approving it (keeping it in the work) or rejecting it (trying another set of parameters). The composer can unexpectedly stumble upon highly sophisticated ideas through these sorts of processes. They’re relying on the technology to produce their complexity. Is this any different from a composer sitting at a piano and, importantly, relying on the piano to find their complex harmonic sequences?

    These sorts of systems blur the line between tool and collaborator. We’re still inclined to say that the resulting piece belongs to, and is authored by, the composer, even though the technology is now contributing directly to the creation of the material itself.

    Further along the spectrum, we get to a stage where we start working with technologies that behave more like agents — systems that can generate musical ideas on their own, improvise, or respond to us in real time. These could be algorithmic improvisers, generative systems, or AI collaborators.

    At this point, the question of authorship starts to unravel further. Who is the author here? Is it the composer, who initiated and guided the process? Is it shared with the coder or designer who built the system? What about the AI itself, which is now capable of producing new abstract and concrete material? We could say that it’s still the composer’s work — another case of using technology to extend creative capacity — but it does feel slightly different. There’s an agency to the process that pushes back, that seems to create with the composer-performer rather than for them.

    Sampling, Assembly, and the Role of the Composer

    Running parallel to these sorts of approaches is the culture of sampling. You can make a track entirely out of material from Splice — a drum loop from one person, a chord progression from another, and a melody from someone else. In these scenarios, almost every building block of the music comes from other creators — not to mention the technologies that went into creating and shaping those materials. Yet the sense of authorship still rests with the one who assembles it.

    This kind of authorship is about reorganisation: curating, reframing, and recomposing pre-existing materials. It’s not unlike crate-digging or collage. Sure, the composer is not inventing the raw materials, but they’re reorganising them into new configurations, giving them new contexts and meanings, often drastically different from their origins. Authorship here becomes less about creating from nothing and more about the composer’s methods of moving things around — how they impose structure, taste, and intention.

    In these settings, the composer engages with technologies or the processing of sourced material to create the core ideas and sounds. But their authorship comes from the assembly of these into musical structures.

    Harvesting Authorship in the Ecosystem of Creativity

    At the furthest end of the spectrum is the artist who types a prompt into an AI model, waits a few seconds, and receives a fully formed track — a finished .wav file that they can release immediately. Here, the idea of authorship becomes extremely fragile. Who made it?

    It doesn’t quite make sense to call the human the ‘composer’ of the work. The creative labour has been abstracted away; the AI is the one producing the abstract and concrete musical materials and organising them into a musical structure. The human is left as the initiator, prompter, or commissioner.

    But even here, it’s not completely clear-cut. If the AI outputs something editable — an entire DAW project, a SuperCollider patch, or even a .wav file that is then split into stems — the human can intervene, reshape it, and make it their own, potentially clawing back some authorship. In other words, the artist begins to harvest authorship back from the system. They inject themselves into the material, react to it, transform it, and in doing so, reclaim a sense of ownership. The process becomes a dialogue: a push and pull between automation and intention.

    How is this process different from getting a sketch or full piece sent over from a collaborator, which the composer then pulls apart, edits, remixes, and makes ‘their own’? (I’m referring here to the differences concerning authorship, not the morals of replacing human collaboration of this kind with AI-human collaboration.)

    Harvesting authorship describes the act of taking something that wasn’t entirely yours to begin with and imprinting yourself upon it through labour, curation, and interpretation. The more you interact, the more you reclaim.

    Across this whole spectrum, from singing with your body to collaborating with generative systems, the underlying question doesn’t really change. Modern techniques of music-making have pushed us further away from that human-only end of the spectrum. But composition is still about how much of yourself you put into the process, and how much the system gives back. What shifts is where the creativity sits, and what forms it takes — in the body, in the workflow, in the code, or in the back-and-forth between the composer and their technology.

    It’s also not just technology that we interact with in the creative process. Consider the interplay between the composer and the spaces they compose with in mind, or the audience members themselves. What about the composer drawing inspiration from biophonic and geophonic sources — birdsong, thunder, waves? Music-making is thus never a single-creator scenario. There is no single, individual author. Authorship, in the sense of ‘who made this?’, is a question of a vast ecosystem of culture, environment, and technology.

    Maybe authorship isn’t about who made what from scratch, but about how creative intentions move through systems. It’s less about purity or originality, and more about interaction, orientation, and the ways we steer complexity into coherence — how an author of a creative work takes a set of inputs as material and shapes them into something aesthetically valuable.

    In that sense, using AI in the creative process isn’t the ‘end of authorship’. It’s a change, for sure, but it’s really just another point on a spectrum that composition has always existed on.

  • AI Collaboration to Build an Album Art Generator

    I used AI in some interesting ways over the past couple of days to do two main tasks: edit writing I had done, and build a website for making generative graphics for album art. The former is probably not so interesting anymore (which is crazy to think, given how novel these workflows are), but the latter definitely was. Both collaborations were interesting ways of exploring the spectrum between non-use and overuse of AI.

    Non-Use and Overuse of AI

    Non-use, to me, feels a little short-sighted in some settings, as it denies the possibility of augmenting my skill set to do new things. There are absolutely times for non-use, but I definitely want to avoid total Luddism. Overuse, on the other hand, is basically getting the AI to make the entire thing for you — the article, the image, the song, the code script. AI can be overused in these ways, and the products then passed off as your own. But even though the AI created the thing, isn’t it still ‘your own’? Or do you have to create the thing entirely yourself in order to say it is, in fact, ‘your own’? These sorts of questions relate to areas outside of AI, such as the use of audio samples from platforms like Splice, or stock images on Unsplash.

    I should say that this is not an exercise in proving overuse to be outright Bad — like most things, there’s nuance to be emphasised. Instead, I’m simply exploring my experience of different uses of the technology — what does it feel like to not use it at all? Is it still rewarding to use AI to entirely create the thing? How can I get the reward of creating something while still leveraging the capabilities of the AI?

    Collaborating with AI

    • Writing

    What I was exploring in my two uses of AI yesterday was using it in an assistive way. For the written work, I wrote the draft, and the AI went through it, pointing out possible edits, fixing errors, and identifying wrong information (in my case, I had got the author of a book wrong). In this instance, the AI was acting in the same way that a human editor acts. I went through a very similar process when I wrote my PhD: passing the draft I had written to the editor, waiting a month, and then receiving a document full of suggestions and fixes. The downsides of this process were that it took a long time, and that the editor made some errors in their suggestions (something to do with the reference style, from memory). The pros, however, were that I gave someone a job — the work employed them, and gave them money for their labour. And, while I counted the wait as a con before, I actually enjoyed that time off and away from the thesis to gather my thoughts about it, and to approach it again with fresh eyes and ideas. By using an AI to edit the document, I am effectively avoiding getting a human editor to do a job.

    If I were to get the AI to write the entire article, I would not develop any of my writing or thinking skills. By using AI in a more assistive way, I exercise those abilities through the act of writing the draft and editing it, constantly practising my writing and thinking skills.

    It comes down to this core question: do I want the thing done, or do I want to do the thing?

    In using AI, I am trading some work off to it, but, importantly, I’m able to manage how much of this outsourcing I am doing.

    • Programming

    The other way I was using AI was by building a small program for creating generative visual art pieces for album covers, using traditional generative-art techniques. In generative art, the artist creates a set of rules and processes which then execute to produce the final art piece, rather than creating the finished piece directly. Each run yields a unique piece, generated within the constraints of the rules laid out in the system. These sorts of systems can be built using code, but I have no experience writing code, so I decided to talk ChatGPT through my ideas for the program and see how it went. The very first program it created worked very well, generating images exactly like what I was after. The program had a few sliders to adjust parameters like Density and Stroke Weight, and allowed me to select which types of shapes it would use. An element of randomness was implemented, and pressing the ‘Regenerate’ button produced a new image each time, under the same core rules. This allows me to generate a cohesive set of images that share similar characteristics but are individually unique.
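    The shape of such a system can be sketched very simply. This is hypothetical Python, not the actual program ChatGPT produced; the parameter names mirror the sliders described above, and the random seed stands in for the ‘Regenerate’ button.

```python
import random

def generate(seed: int, density: int = 20, stroke_weight: float = 2.0,
             shapes=("circle", "line", "triangle")) -> list[dict]:
    """Produce one 'image' as a list of shape descriptions.

    The rules are fixed; density, stroke_weight, and the shape menu
    play the role of the sliders, and the seed stands in for the
    'Regenerate' button. All names here are illustrative.
    """
    rng = random.Random(seed)  # same seed -> same image, new seed -> new image
    return [
        {
            "shape": rng.choice(shapes),
            "x": rng.random(),            # normalised canvas coordinates
            "y": rng.random(),
            "size": rng.uniform(0.02, 0.2),
            "stroke": stroke_weight,
        }
        for _ in range(density)
    ]

# Two runs under the same rules: cohesive in character, individually unique.
piece_a = generate(seed=1)
piece_b = generate(seed=2)
print(len(piece_a), piece_a[0]["shape"], piece_b[0]["shape"])
```

    The artist’s authorship sits in the rules and the parameter choices; the randomness fills in the rest.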

    Two main issues arose from my minimal coding experience. Firstly, I could not easily edit or debug the generated program myself. When I prompted ChatGPT for fixes, its accuracy was sometimes inconsistent, often leaving me unable to add or alter elements. This collaborative process, however, became a learning experience. ChatGPT responded to me as if I was a beginner, rather than a completely clueless coder. This pushed me slightly beyond my capabilities, developing my understanding of how code works. I did, however, struggle at times to find where to paste the new code, so I asked ChatGPT to tell me what the old code looked like so that I could find it and replace it with the new code.

    I obviously didn’t feel like I had created the program myself. Sure — the artworks it produced felt sort of like mine, but the program itself didn’t. If I had coded that program, I would feel far more rewarded every time it produced an artwork.

    Reward in the Creative Process; Ownership

    But is this much different to, say, a person who works in woodworking, doing most things by hand, but then acquiring a particular machine that allows them to do so much more? It’s still creative work, but now the person is relying on a machine to do some of the work that they originally wouldn’t have been able to do themselves. Is there much of a difference here?

    (Something I did observe was that it did drive me to really want to learn to code. I’ve been interested in other forms of programming using objects in platforms like MaxMSP and Bitwig’s The Grid, but I’ve never fully taken the plunge with learning to code. That could be a side project I undertake this summer.)

    Again, it comes back to the core question: do I want to have the thing done, or do I want to do the thing?

    Do I want to learn the techniques, put them to use, fail, succeed, learn and feel ownership over my creations? For sure. But is there also a bit of joy in having this program in front of me that has been made specifically for me, based on my ideas? Absolutely.

    I don’t think it’s black or white. Having the AI simply produce the generative art images itself, and then calling them my own… that feels far more empty. In the same way, getting the AI to write the entire article, or getting it to produce an entire piece of music, seems like too much outsourcing to feel much reward in, and connection to, what has been created. There’s very little creative joy in those types of processes.

    There is something that feels good about being able to do things ourselves. Sure, we can store information in a personal knowledge management program like Obsidian or Notion, building a large collection of notes about our interests. Or we can just say, ‘Hey, it’s on the internet; what’s the need to remember these things?’. But it feels good to know the things yourself: to hold the ideas in your head, and be able to merge them and explore the connections yourself. There’s a self-sufficiency that comes from that. It feels good to learn new things, and to gain new capabilities and skills. It feels good to be very good at something. Just as a software update makes a phone a more capable device, going through skill- or knowledge-development processes feels good and deeply rewarding. Gaining new capabilities is one of the things we praise in our culture: development, growth, maturity, advancement. Think of Neo in The Matrix gaining the capabilities of Kung Fu fighting. Think of the montages of characters in sports films, training hard, struggling, falling, getting up again, training, training, training, and eventually getting very good at what they struggled with before. These sorts of stories permeate our culture because they align with a core element of modern experience: development and expanding capabilities.

    AI Augmenting Capabilities

    A major part of this is that I can use AI to help me do things I can’t do on my own, rather than getting it to do things that I can and want to do, such as writing out my ideas. It’s important to be aware that whatever I get ChatGPT to do, I won’t get practice in. If I get it to write out my ideas (for example, brainstorm something, or write out an entire article), then I won’t get practice in thinking and converting ideas to written words, which I see as an extremely valuable ability. If I get it to edit my writing, however, I will get practice in writing the ideas and some editing, but I won’t get practice in the proper fine-toothed-comb editing of writing. But this would be the same case if I worked with an editor. If I get it to write code for programs based on my ideas, I won’t get practice coding. However, I do feel like I learnt a bit about code yesterday by working alongside the AI, copying and pasting chunks of code and looking around the script. I didn’t learn anywhere near the amount I would have if I had written the script myself, but it would have taken me a very long time to be able to do that. This isn’t a bad thing — learning is supposed to take time. But this was a different experience to traditional approaches to learning: I could immediately create things of higher complexity, while learning how code works in the process.

    But the counter to all of this hyper-optimism is that these positive outcomes will only occur if users are aware of AI’s potential to do the exact opposite: to limit our capabilities, expressive capacities, and creativity, to cut us off from opportunities, and to raise new barriers. Over-reliance on the technology will stop us from doing the things that produce these positive outcomes, stunting the growth of our own skills and capabilities, reducing our knowledge and mental capacities, and causing all sorts of issues in navigating the world due to under-education.

    Just like many past tools and technologies, AI is both a gift and a burden; it can both extend us and hinder us. Which of these it falls towards depends on the user’s modes of use.