Recent isotropic networks, such as ConvMixer and vision transformers, have found significant success across visual recognition tasks, matching or outperforming non-isotropic convolutional neural networks (CNNs). Isotropic architectures are particularly well-suited to cross-layer weight sharing, an effective neural network compression technique. In this paper, we perform an empirical evaluation of methods for sharing parameters in isotropic networks (SPIN). We present a framework to formalize major weight sharing design decisions and conduct a comprehensive empirical evaluation of this design space. Guided by our experimental results, we propose a weight sharing strategy that generates a family of models with better overall efficiency, in terms of FLOPs and parameters versus accuracy, than traditional scaling methods alone, for example compressing ConvMixer by 1.9x while improving accuracy on ImageNet. Finally, we perform a qualitative study to further understand the behavior of weight sharing in isotropic architectures.
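To make the core idea concrete, the sketch below shows cross-layer weight sharing in a ConvMixer-style isotropic network: because every block maps a tensor of one shape to a tensor of the same shape, a single block's parameters can be reused at several depth positions. This is a minimal illustration only, not the paper's SPIN implementation; the block structure, the sequential sharing pattern, and the names (`MixerBlock`, `SharedIsotropicNet`, `num_unique`) are all assumptions for exposition.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """ConvMixer-style block: depthwise spatial mixing + pointwise channel mixing."""
    def __init__(self, dim, kernel_size=9):
        super().__init__()
        self.spatial = nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same")
        self.pointwise = nn.Conv2d(dim, dim, 1)
        self.norm1 = nn.BatchNorm2d(dim)
        self.norm2 = nn.BatchNorm2d(dim)

    def forward(self, x):
        x = x + self.norm1(torch.relu(self.spatial(x)))   # residual spatial mixing
        return self.norm2(torch.relu(self.pointwise(x)))  # channel mixing

class SharedIsotropicNet(nn.Module):
    """Isotropic net where `num_unique` blocks are reused to reach depth `depth`."""
    def __init__(self, dim=256, depth=12, num_unique=3, num_classes=1000):
        super().__init__()
        self.stem = nn.Conv2d(3, dim, kernel_size=7, stride=7)  # patch embedding
        # Only `num_unique` blocks hold parameters; each is applied
        # depth // num_unique times (a simple sequential sharing pattern).
        self.blocks = nn.ModuleList(MixerBlock(dim) for _ in range(num_unique))
        self.repeats = depth // num_unique
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.stem(x)
        for block in self.blocks:
            for _ in range(self.repeats):  # same weights reused at several depths
                x = block(x)
        return self.head(x.mean(dim=(2, 3)))  # global average pool + classifier

net = SharedIsotropicNet()
# Parameter count reflects 3 unique blocks, not 12, while depth is unchanged.
print(sum(p.numel() for p in net.parameters()))
```

One design choice worth noting in any such sketch: normalization layers here are shared along with the convolutions, so batch statistics are pooled across the depth positions where a block is reused; which parameters to share versus keep per-position is exactly the kind of design decision the paper's framework formalizes.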