When Midjourney opened to the public in early 2022, the results were breathtaking, but unmistakably early.
- Noemi Kaminski
- Sep 27
The first image here (left) shows that stage perfectly: imaginative and haunting, yet clearly born of algorithms still finding their footing.
Skin tones have a painterly glaze, foliage melts into brush-like strokes, and the lighting hints at depth without obeying real-world physics.
It was art, but it wasn’t photography.
Fast-forward to today (right).
The freckles, the hairline flyaways, the diffraction of light through those butterfly-wing leaves: every detail lands with optical precision.
It’s still fantasy, but it feels captured, not generated.
Model Progress ≠ Automatic Realism
Midjourney’s versions (v1/v2 → v6) brought sharper detail, but realism isn’t just an upgrade button. The engine improved its understanding of light, texture, and depth, yet results still depend on how we guide it.
Prompt Engineering as a Creative Skill
Early prompts were short—“forest queen, gold crown.”
Today, we speak in a new visual language: focal length, volumetric lighting, color temperature, depth-of-field cues. The richer the vocabulary, the more convincingly the model can mimic reality.
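As a rough illustration (not an actual prompt from either image), a richer version of that early prompt might read: “portrait of a forest queen wearing a gold crown, 85mm lens, shallow depth of field, volumetric light through the canopy, warm color temperature, fine skin texture with freckles, photorealistic.” Each added clause is a direction the model can follow instead of a blank it has to invent.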
Designers Are Now Directors
Working with AI is less like using a paintbrush and more like running a film set. You decide the shot list, lighting plan, and post-processing, then the model executes.
Generative AI isn’t just making prettier pictures; it’s collapsing the gap between concept and camera.
Understanding how to describe light, physics, and composition is now as important as understanding typography or color theory.
If your team is exploring AI visuals, ask:
- Are we feeding the model enough real photographic context?
- Do we treat prompts like creative briefs or like casual guesses?
The difference between 2022’s “rough magic” and today’s photoreal fantasy isn’t just technology; it’s also technique.