Memo Akten's Learning to See trains a neural network on classical paintings, then feeds it live webcam footage. Crumpled towels become Turner seascapes. Tangled cables transform into Van Gogh skies. The work isn't about the network's intelligence—it's about what happens when an artist treats a model as a collaborator with its own peculiar way of seeing.

This reframing matters. Much of the discourse around generative AI fixates on replacement: Will the machine do what the artist does, only faster? But artists working seriously with machine learning rarely describe their practice this way. They describe partnerships, negotiations, surprises.

What follows is a practical look at how ML functions as a creative medium rather than an automation tool. The interesting questions aren't about prompts or outputs. They're about model choice as material decision, dataset curation as authorship, and the iterative loops where human intuition and algorithmic suggestion shape each other into something neither could produce alone.

Model Selection as Material Decision

Choosing a machine learning model is closer to choosing a medium than choosing a tool. A StyleGAN trained on faces embodies a different aesthetic logic than a diffusion model. RunwayML's interface invites different gestures than a Jupyter notebook running PyTorch. Each carries assumptions about what matters and what counts as a good result.

For artists entering this space, accessible frameworks have multiplied. ml5.js wraps TensorFlow models in a syntax that feels native to creative coders working in p5.js. Magenta extends ML into musical and visual generation with relatively gentle entry points. Hugging Face hosts thousands of pretrained models that can be queried with a few lines of Python. None of these require expertise in linear algebra to produce meaningful work.
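The Hugging Face route really is that short. A minimal sketch, assuming the transformers library is installed; distilgpt2 is just one small hosted checkpoint among thousands, chosen here only because it downloads quickly:

```python
# Query a pretrained model hosted on Hugging Face in a few lines.
# The model name is an example; any hosted checkpoint with a
# compatible task would work.
from transformers import pipeline

# Build a text-generation pipeline from a small pretrained model.
generate = pipeline("text-generation", model="distilgpt2")

# Offer the model a seed phrase; it continues the text in its own voice.
result = generate("The garden at dusk", max_new_tokens=20)
print(result[0]["generated_text"])
```

The same pipeline helper covers image classification, captioning, and other tasks by swapping the task string and model, which is what makes the platform usable as a sketchbook rather than an engineering commitment.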

But accessibility shouldn't be confused with neutrality. Every model embeds aesthetic priors from its training. A pose estimation model trained mostly on adults in well-lit studios will fail interestingly on dancers in shadow. A text-to-image model with strong photographic bias will resist your attempts at painterly outputs. These limitations aren't bugs—they're the grain of the material.

The artists doing the most compelling work tend to choose models the way a sculptor chooses stone. They study how the model fails. They look for the textures only this particular system produces. The question shifts from what can this do to what can only this do.

Takeaway

A model is not a transparent tool but a medium with its own aesthetic grain. The interesting work begins when you stop fighting that grain and start working with it.

Training as Curatorial Practice

When you train or fine-tune a model, you're not just optimizing parameters—you're declaring what the world looks like. Every image included is an assertion. Every image excluded is an editorial choice. The dataset becomes a portrait of the artist's attention.

Anna Ridler's Myriad (Tulips) made this explicit. She photographed and hand-labeled ten thousand tulips, then trained a GAN on the resulting dataset. The work foregrounds the labor and subjectivity baked into machine learning systems we usually treat as objective. Her tulips are her tulips—classified by her hand, shaped by her categories.

This curatorial dimension scales down to small projects. Fine-tuning a model on a hundred photographs of your grandmother's garden produces something irreducibly personal. The model learns her preferences for asymmetric arrangements, her color palette, the particular quality of light in her region. Run inference and you get not generic flowers but something inflected by her specific seeing.
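The mechanics of such a small fine-tune are modest. A minimal sketch in PyTorch, with random tensors standing in for the photographs and a randomly initialized backbone standing in for a real pretrained one; the four "flower categories" are hypothetical labels, not anything from an actual project:

```python
# Sketch of small-dataset fine-tuning: freeze a feature extractor,
# train only a new head on a tiny personal dataset.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a pretrained feature extractor, frozen as in typical fine-tuning.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 32), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False

# New head adapted to the personal dataset (hypothetical four categories).
head = nn.Linear(32, 4)

images = torch.randn(16, 3, 8, 8)    # stand-in for ~100 curated photographs
labels = torch.randint(0, 4, (16,))  # stand-in for hand-assigned labels

opt = torch.optim.Adam(head.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

first_loss = None
for step in range(50):
    logits = head(backbone(images))
    loss = loss_fn(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if first_loss is None:
        first_loss = loss.item()

print(f"loss {first_loss:.3f} -> {loss.item():.3f}")  # loss should drop
```

In a real project the images would come from torchvision's ImageFolder and the backbone from a pretrained checkpoint, but the shape of the work is the same: a large model's general seeing, bent toward one person's particulars.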

The implications run deeper than personalization. Curating a training set forces you to articulate aesthetic positions you might never have stated explicitly. Why these images and not others? What do they share? What's the underlying logic? The dataset becomes a kind of manifesto written in examples rather than words.
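One way to take this seriously is to write the editorial position down as code. A minimal sketch, with hypothetical metadata fields; real projects would pull these from EXIF data or a hand-built spreadsheet:

```python
# Making curatorial criteria explicit: the dataset filter as manifesto.
# All paths and metadata below are illustrative placeholders.
candidates = [
    {"path": "img_001.jpg", "light": "dusk", "arrangement": "asymmetric"},
    {"path": "img_002.jpg", "light": "noon", "arrangement": "symmetric"},
    {"path": "img_003.jpg", "light": "dusk", "arrangement": "asymmetric"},
]

def include(photo):
    """The aesthetic position, stated as a predicate."""
    return photo["light"] == "dusk" and photo["arrangement"] == "asymmetric"

# Every inclusion (and every exclusion) is now a legible assertion.
dataset = [p["path"] for p in candidates if include(p)]
print(dataset)
```

Writing the predicate forces the articulation the paragraph above describes: you cannot filter on "the light I like" until you have named what that light is.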

Takeaway

Selecting training data is an act of authorship as fundamental as composition or color choice. What you feed the model is a statement about what deserves to be seen.

The Feedback Loop as Studio Practice

The richest creative use of machine learning rarely happens in a single generation step. It happens in loops—artist proposes, model responds, artist edits, model re-responds—until something emerges that surprises both parties. This iterative rhythm is closer to printmaking or darkroom photography than to typing a prompt and accepting the result.

Sofia Crespo's work with marine forms exemplifies this. She generates outputs, selects fragments she finds compelling, composites them, feeds the composites back through additional models, and iterates. The final pieces aren't model outputs—they're the residue of an extended dialogue. The model's contribution and her contribution become impossible to separate cleanly.

Building this kind of practice requires infrastructure. Tools like ComfyUI and node-based environments allow artists to chain models, route outputs as inputs, and construct workflows that branch and recombine. The interface itself becomes a kind of instrument, learnable through repetition the way a musician learns an effects chain.

What emerges from sustained loop-work is something that is neither pure human intention nor pure algorithmic generation. The artist develops intuitions about how the model will respond to particular inputs. The model becomes a partner whose tendencies you learn to anticipate, exploit, or productively resist. This is collaboration in the technical sense: two systems with different competencies producing something together.
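The propose-select-feed-back rhythm can be sketched schematically. Here a deterministic toy function stands in for the generative model and a distance score stands in for the artist's judgment; both are placeholders for processes that in practice are far less reducible:

```python
# Schematic of the iterative loop: model proposes variations,
# artist selects, the selection becomes the next round's input.
import random

random.seed(7)

def model_respond(seed_value):
    """Stand-in for a generative model: returns a variation on its input."""
    return seed_value + random.uniform(-1.0, 1.0)

def artist_score(value, target=5.0):
    """Stand-in for the artist's judgment: closeness to a felt target."""
    return -abs(value - target)

current = 0.0
history = [current]
for turn in range(20):
    # Model proposes several variations on the current material.
    candidates = [model_respond(current) for _ in range(5)]
    # Artist selects the fragment that feels most compelling...
    best = max(candidates, key=artist_score)
    # ...and feeds it back only if it improves on what came before.
    if artist_score(best) > artist_score(current):
        current = best
    history.append(current)

print(f"after {len(history) - 1} turns: {current:.2f}")
```

The point of the toy is the structure, not the arithmetic: no single turn produces the result, and the final state depends on every selection along the way, which is why the contributions become impossible to separate cleanly.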

Takeaway

Creative ML practice is a conversation, not a command. The depth comes from how many turns you take, not how clever your opening line is.

Machine learning as creative collaborator demands a shift in posture. The artist isn't operating a generation machine. They're choosing materials, curating examples, and entering extended dialogues with systems that have their own ways of responding.

This framing reclaims something the replacement narrative obscures: that artistic work has always involved partnership with materials that push back. Stone resists. Paint pools unpredictably. Code throws errors. Models hallucinate in their own particular ways.

The artists making lasting work with ML treat these tools the way artists have always treated their media—with curiosity about constraints, respect for the material's logic, and willingness to be changed by what they make.