I used AI in two main tasks over the past couple of days: editing writing I had done, and building a website for making generative graphics for album art. The former is probably no longer so interesting (which is crazy to think, given how novel these workflows are), but the latter definitely was. Both collaborations explored the spectrum between non-use and overuse of AI.
Non-Use and Overuse of AI
Non-use, to me, feels a little short-sighted in some settings, as it denies the possibility of augmenting my skill set to do new things. There are absolutely times for non-use, but I definitely want to avoid total Luddism. Overuse, on the other hand, is basically getting the AI to make the entire thing for you — the article, the image, the song, the code. AI can be overused in these ways, and the products passed off as your own. But even though the AI created the thing, isn’t it still ‘your own’? Or do you have to create the thing entirely yourself in order to say it is, in fact, ‘your own’? These questions extend beyond AI, to areas such as the use of audio samples from platforms like Splice, or stock images from Unsplash.
I should say that this is not an exercise in proving overuse to be outright Bad — like most things, there’s nuance to be emphasised. Instead, I’m simply exploring my experience of different uses of the technology — what does it feel like to not use it at all? Is it still rewarding to use AI to entirely create the thing? How can there be a balance of getting the reward of creating something while still leveraging the capabilities of the AI?
Collaborating with AI
-
Writing
What I was exploring in my two uses of AI yesterday was using it in an assistive way. For the written work, I wrote the draft, and the AI went through it, pointing out possible edits, fixing errors, and identifying wrong information (in my case, I had got the author of a book wrong). In this instance, the AI was acting the same way a human editor acts. I went through a very similar process when I wrote my PhD: passing the draft I had written to the editor, waiting a month, and then receiving a document full of suggestions and fixes. The downsides of this process were that it took a long time, and that the editor had made some errors in their suggestions (something to do with the reference style, from memory). The pros, however, were that I gave someone a job — the work employed them, and paid them for their labour. And, while I listed it as a con before, I actually enjoyed that time off and away from the thesis: it let me gather my thoughts about it, and approach it again with fresh eyes and ideas. By using an AI to edit the document, I am effectively avoiding hiring a human editor.
If I were to get the AI to write the entire article, I would not develop any of my writing or thinking skills. By using AI in a more assistive way, I stay engaged through the acts of drafting and editing, constantly practising my writing and thinking.
It comes down to this core question: do I want the thing done, or do I want to do the thing?
In using AI, I am trading some work off to it, but, importantly, I’m able to manage how much of this outsourcing I am doing.
-
Programming
The other way I was using AI was by building a small program for creating generative visual art pieces for album covers, using traditional generative art techniques. In generative art, the artist creates a set of rules and processes which then execute to produce the final art piece, rather than creating the finished piece directly. Each run yields a unique piece, generated within the constraints of the rules laid out in the system. These sorts of systems can be built using code, but I have no experience writing code, so I decided to talk ChatGPT through my ideas for the program and see how it went. The very first program it created worked very well, generating images exactly like what I was after. The program had a few sliders to adjust parameters like Density and Stroke Weight, and allowed me to select which types of shapes it would use. An element of randomness was implemented, and pressing the ‘Regenerate’ button produced a new image each time, under the same core rules. This allows me to generate a cohesive set of images that share similar characteristics but are individually unique:

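The actual program ChatGPT built isn’t reproduced here, but the core idea — fixed rules, adjustable parameters, and seeded randomness standing in for the ‘Regenerate’ button — can be sketched in a few lines of Python. Everything below (the function, its parameter names, the shape list) is illustrative, modelled loosely on the Density and Stroke Weight controls described above, not the real code:

```python
import random

# Illustrative sketch of a rule-based generative system: the "rules" are
# fixed in code, the parameters are adjustable, and the seed decides which
# unique variation you get.
def generate(density=50, stroke_weight=2, shapes=("circle", "rect"), seed=None):
    rng = random.Random(seed)  # same seed -> same piece; new seed -> new piece
    pieces = []
    for _ in range(density):
        pieces.append({
            "shape": rng.choice(shapes),
            "x": rng.random(),            # position, normalised 0-1
            "y": rng.random(),
            "size": rng.uniform(0.02, 0.2),
            "stroke": stroke_weight,
        })
    return pieces

# "Regenerate": each fresh seed yields an individually unique piece
# that still obeys the same core rules, so a set stays cohesive.
artwork_a = generate(density=80, seed=1)
artwork_b = generate(density=80, seed=2)
```

A real version would hand these shape descriptions to a drawing library, but the structure — rules, parameters, and a reseedable random source — is the whole generative-art idea in miniature.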
Two main issues arose from my minimal coding experience. Firstly, I could not easily edit or debug the generated program myself. When I prompted ChatGPT for fixes, its accuracy was sometimes inconsistent, often leaving me unable to add or alter elements. This collaborative process, however, became a learning experience. ChatGPT responded to me as if I were a beginner, rather than a completely clueless coder. This pushed me slightly beyond my capabilities, developing some of my understanding of how code works. I did, however, struggle at times to find where to paste the new code, so I asked ChatGPT to show me the old code so that I could find it and replace it.
I obviously didn’t feel like I had created the program myself. Sure — the artworks it produced felt sort of like mine, but the program itself didn’t. If I had coded that program, I would feel far more rewarded every time it produced an artwork.
Reward in the Creative Process; Ownership
But is this much different to, say, a woodworker who does most things by hand, but then acquires a particular machine that allows them to do so much more? It’s still creative work, but now they are relying on a machine to do some of the work that they originally wouldn’t have been able to do themselves. Is there much of a difference here?
(Something I did observe was that it did drive me to really want to learn to code. I’ve been interested in other forms of programming using objects in platforms like MaxMSP and Bitwig’s The Grid, but I’ve never fully taken the plunge with learning to code. That could be a side project I undertake this summer.)
Again, it comes back to the core question: do I want to have the thing done, or do I want to do the thing?
Do I want to learn the techniques, put them to use, fail, succeed, learn and feel ownership over my creations? For sure. But is there also a bit of joy in having this program in front of me that has been made specifically for me, based off my ideas? Absolutely.
I don’t think it’s black or white. Having the AI simply produce the generative art images itself, and then calling them my own… that feels far more empty. In the same way, getting the AI to write the entire article, or getting it to produce an entire piece of music, seems like too much outsourcing to feel much reward in, and connection to, what has been created. There’s very little creative joy in those types of processes.
There is something that feels good about being able to do things ourselves. Sure, we can store information in a personal knowledge management program like Obsidian or Notion, building a large collection of notes about our interests. Or we can just say, ‘Hey, it’s on the internet; why remember these things?’. But it feels good to know the things yourself: to hold the ideas in your head, and to merge them and explore the connections yourself. There’s a self-sufficiency that comes from that. It feels good to learn new things, and to gain new capabilities and skills. It feels good to be very good at something. Just as a software update makes a phone a more capable device, going through skill- or knowledge-development processes feels deeply rewarding. Gaining new capabilities is one of the things we praise in our culture: development, growth, maturity, advancement. Think of Neo in The Matrix gaining the capabilities of Kung Fu fighting. Think of the montages in sports films: characters training hard, struggling, falling, getting up again, training, training, training, and eventually excelling at what they struggled with before. These stories permeate our culture because they align with a core element of modern experience: development and expanding capabilities.
AI Augmenting Capabilities
A major part of this is that I can use AI to help me do things I can’t do on my own, rather than getting it to do things that I can and want to do, such as writing out my ideas. It’s important to be aware that whatever I get ChatGPT to do, I won’t get practice in. If I get it to write out my ideas (for example, brainstorming something, or writing an entire article), then I won’t get practice in thinking and converting ideas to written words, which I see as an extremely valuable ability. If I get it to edit my writing, however, I will get practice in writing the ideas and some editing, but not in the proper fine-toothed-comb editing of writing. But that would equally be the case if I worked with a human editor. If I get it to write code for programs based on my ideas, I won’t get practice coding. That said, I do feel like I learnt a bit about code yesterday by working alongside the AI, copying and pasting chunks of code and looking around the script. I learnt nowhere near the amount I would have by writing the script myself, but reaching that point would take me a very long time. This isn’t a bad thing — learning is supposed to take time. But it was a different experience to traditional approaches to learning: I could immediately create things of higher complexity, while learning how code works along the way.
But the counter to all of this hyper-optimism is that these positive outcomes will only occur if users are aware of AI’s potential to do the exact opposite: to limit our capabilities, expressive capacities and creativity, to cut us off from opportunities, and to raise new barriers. Over-reliance on the technology will stop us from doing the things that produce these positive outcomes, stunting the growth of our own skills and capabilities. It will erode users’ knowledge and mental capabilities, causing all sorts of difficulties in navigating the world.
Just like many past tools and technologies, AI is both a gift and a burden; it can both extend us and hinder us. Which of these it falls towards depends on the user’s mode of use.