As you know, I spend a lot of time practicing with and reviewing AI image and video generation tools. And if you’ve been reading this newsletter lately, you know that most of the popular platforms are actually aggregators of several different models. Platforms like Krea, OpenArt, and Freepik give you access to a dozen image generators and at least half a dozen video models. If you’re just getting started, this is a great way to compare Flux, Imagen, and ChatGPT and find the tool that’s best for your workflow. And then you can extend the process by contrasting a variety of video generators. While there are usage limits, they are very generous, and you could make a huge leap in your understanding for about $50. While not necessarily better, this is certainly a broader approach than using a dedicated tool like Runway or Midjourney. Both are excellent, but each is a standalone, proprietary model.
Up until a week ago, I would have told you to go with Krea if you were going to commit to a platform. It’s fun to use, very fast, has a ton of models to choose from, and comes with a generous credit allowance. However, on July 1st, I became a beta tester for Alisa by Genova Labs, and I think it is the best AI platform I have used so far. While all the platforms are good and most are licensing existing models, the key differentiator is ease of use. Many web-based AI platforms start small and quickly become crowded and overwhelmed by feature bloat. Leonardo is an example of a tool that was once great but has become harder to use over time.
No surprises here: you are met with a prompt box when you arrive on the front page. This is followed by a word cloud of all the different things you can make, which sorts and filters a series of community projects. Alisa aims to be your all-in-one creative shop, extending beyond image and video into writing, mini games, and simple websites.
However, its real winning feature is the uncrowded interface. There is just enough whitespace, and just enough restraint on features, that I was up and running quickly without ever needing to look at any instructions or tutorials. While I’m sure they will grow and features will expand over time, I hope they don’t add too many, because this uncluttered look is a breeze to use. My workflow has been centered around generating images with the Flux Kontext model (great for consistent characters) and videos with the new Seedance model (great for cinematic motion). Here’s something I put together in five minutes as a proof of concept.
My only caveat is that I don’t know about pricing yet, but I assume it will be competitive with the major players. Head over to Alisa by Genova Labs and see if you can get on the waitlist. Here’s my prediction: by the end of July, all of the YouTubers will be telling you that this is the best new platform, and you can say you read it here first.
Great Articles That Made Me Think
Rune Madsen, from Design Systems International, writes about “When Figma Starts Designing Us”:
However, over the course of the last five years, I’ve grown increasingly worried about what Figma is doing to the field of design by pushing designers toward an engineering-centric way of working.
Anna Arteeva, of Design Systems Collective, writes about how to integrate the new Vibe Coding tools into your existing Design System.
AI prototyping tools like Lovable, Bolt, V0, and Replit have made it incredibly fast to spin up new apps — but speed means little if what you’re building doesn’t match the reality of your product. Most teams aren’t starting from scratch; they’re working within established brands, component libraries, and real constraints. The real unlock isn’t just generating UI — it’s making these tools play nicely with your existing design system.
That’s it for today. I hope your summer is going well. Remember, if you’re getting some value out of this newsletter, you can forward it to your friends, and/or buy me a coffee.