I really like the cross-LM workflow. For example: you use o3 to suggest fixes, then copy o3's proposed fix, paste it into Gemini, and ask for a full implementation. Once Gemini has implemented it, you show the updated code to o3 (along with any feedback from the runtime) and ask it to criticize, propose further fixes, or confirm that all is good. Each model, used in isolation, eventually falls into self-repetition. With this back and forth between two models, I feel less stuck in such self-repetition loops. And Gemini codes so much faster than the others!
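The back and forth above can be sketched as a simple loop. Note that `ask_critic` and `ask_coder` below are hypothetical stand-ins for real calls to o3 and Gemini — here they are stubbed out so the control flow runs on its own; this is a sketch of the workflow, not an actual API integration.

```python
def ask_critic(code: str, runtime_feedback: str) -> str:
    """Stand-in for the reviewing model (e.g. o3):
    return "OK" to approve, or a proposed fix otherwise."""
    return "OK" if "fixed" in code else "suggest: handle the empty-input case"

def ask_coder(code: str, proposed_fix: str) -> str:
    """Stand-in for the implementing model (e.g. Gemini):
    apply the critic's proposed fix to the code."""
    return code + "  # fixed: " + proposed_fix

def cross_model_loop(code: str, max_rounds: int = 5) -> str:
    """Alternate between critic and coder until the critic approves."""
    runtime_feedback = ""
    for _ in range(max_rounds):
        verdict = ask_critic(code, runtime_feedback)
        if verdict == "OK":                  # critic confirms all is good
            break
        code = ask_coder(code, verdict)      # coder implements the fix
        # runtime_feedback = run_tests(code) # optionally feed runtime output back
    return code

print(cross_model_loop("def f(xs): return sum(xs)"))
```

The point of the split is that the critic never sees its own output as its own: each round, the code it reviews arrives "from elsewhere", which is what seems to break the self-repetition loop.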
Replying to @burkov
I've found you can do this with the same model if you just tell the model that the content was from a different model.

Jul 5, 2025 · 6:52 AM UTC

Replying to @BowsersaurusRex
Yes, but it's not as effective. If you take the code that Gemini just reproduced in its output, claiming it was "fixed", and start a new conversation with that same code, the model converges again very fast.
No, it's not as effective. But it's funny how fast it'll turn on itself if you simply say "you're a new model now, can you review what this other model was doing?"