Nice perspective. I think a lot of power users of these LLMs have figured out how to start from a close approximation of their own distribution via prompting + context engineering, and go from there. Ah! This is the first time I've heard the term "inter-model homogeneity", but it makes sense: you can see the same thing with diffusion models of the same size/generation, where you get fairly similar outputs for the same text prompt across models.