7.2.26

Nanobanana and Google Photos

When running some tests with Nanobanana in ComfyUI I asked it to make a picture of a grey cat drinking from a hosepipe on a slate roof. This is because the grey cat asks me to turn the tap on for her every morning. What alarmed me were the slate roof tiles in the result. Obviously I can't prove it, but I'm reasonably certain that these have been scraped from the multiple pictures of my own garden roof terrace in Google Photos.

Google emphatically deny that they have trained their AI on Google Photos. They say, "We don't train any generative AI models outside of Google Photos with your personal data in Google Photos." However, the wording here FEELS slippery. The phrase "with your personal data" might imply that the photos could be "anonymised" - stripped of geographical information and so on. Or perhaps, having trained the models inside Google Photos, they then take those models and use them elsewhere? That would also satisfy the wording. [Edit: Maybe, because some of these pictures were on Sick Veg - a remotely hosted WordPress website - they were scraped from there?]

They certainly leapt far ahead of the pack with Nanobanana and Veo, and it's tempting to conclude that they allowed themselves to train their models on Google Photos. I'm sure that if I went through the fine print I would find that by using this service (even though I have paid for it for years with Google Workspace...), I had surrendered my rights to this data for their purposes.

Certainly there's an irony to me posting this on Google's Blogger (the original Sick Veg blog posts were all on WordPress so that's not the leak). But I would argue that what one posts on Blogger and YouTube is by definition for public consumption. It's a very tiny drama, but one which is so personal to me that it feels like an infringement on what is sacred.

AI Cat.

Real cats.

31.1.26

Boosters, Traddies, and Geeks.

I've written twice about AI on my LinkedIn account, and on both occasions I deleted the post. So this will be third time lucky.


In the first post I described writing a prompt to replicate the view across from me as I sit at my desk. The shot on the left is a photo; on the right is my attempt to replicate it with AI. Using only a text prompt I didn't do badly, but the process did get exasperating... This sort of thing, I've found, is much easier to achieve within node-based programmes.

In the second post I talked about this short by Akos Papp which was the first AI film I actually enjoyed. I segued into a spiel about how I don't see AI as "a tool" (a now generic description of its offering) - but instead like hiring a massive team of robots. I gave a nod to the idea that an animator in post-production is as close to a robot as you get.
 
I deleted both posts because my LinkedIn feed is already overflowing with people writing about AI. However, just for my own purposes, I thought I would revisit the topic one more time in the cooler environment of a blog post and clarify my own thinking about what's happening and where things are headed.

16.1.26

New Forms

Lulu gave me a lesson on her sewing machine, which I've borrowed.

I made two of these forms as pillows. Organic cotton. Organic wool stuffing.

3.1.26

A Mac Called Walden

I bought this Mac for £50 a couple of years ago from an architecture practice in Islington. It was acquired to help my son edit his skateboard videos on DV. I have my fully legal, permanent licences of Adobe Photoshop, Illustrator, and After Effects CS6 installed on it, along with equally legitimate licences of Final Cut Studio and Cinema 4D Studio. I made animations and videos for years with nothing more.