Nanopublications LDF server

Search Nanopublications by triple/quad pattern

Matches in Nanopublications for { ?s ?p """
Merging models trained for long with WIDEN

When models are trained on a lot of data, they diverge further from the baseline (e.g. in continual pretraining for additional languages), and current merging methods underperform in this setting.
https://alphaxiv.org/pdf/2408.03092
@AlibabaGroup
https://twitter.com/LChoshen/status/1823002789217493392/photo/1

How do you do that? Assume we update a weight matrix using a few models:
- Pick a pretrained model and treat the rest of the models as diffs from it (task vectors).
- Normalize each row of every model, separating the normalization factor (the magnitude) from the normalized row (the direction).
- Weigh every row by how much it changed (higher = better) and average them all together, plus a trick that sometimes keeps the original weight, so the weights might not sum to 1.

You can see how this follows recent findings about direction and magnitude (e.g. https://x.com/prateeky2806/status/1727589818618523783).

While the results for "just" merging do not change that much, merging with a continually trained model (Sailor) that added many languages looks quite good!
https://twitter.com/LChoshen/status/1823002796259791276/photo/1

Criticism (@askalphaxiv didn't upload the comment): there is vast overclaiming in calling Sailor a different pretrained model. The method is quite complex, it is hard to know whether it will generalize, and they only show a specific model.
""" ?g. }
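The matched literal above walks through a row-wise merging procedure. Below is a minimal, illustrative NumPy sketch of that description, not the WIDEN authors' implementation (see https://alphaxiv.org/pdf/2408.03092 for the real method): the function name widen_style_merge, the softmax weighting over row magnitudes, and the keep_threshold heuristic for keeping the original row are assumptions made for illustration.

import numpy as np

def widen_style_merge(base, finetuned, keep_threshold=1e-3, temperature=1.0):
    """Merge fine-tuned weight matrices into a shared base, row by row,
    weighting each model's row by how much it changed from the base."""
    # Treat every fine-tuned model as a diff from the base (a task vector).
    deltas = [w - base for w in finetuned]
    merged = base.copy()

    for r in range(base.shape[0]):
        rows = [d[r] for d in deltas]
        # Separate each row into a magnitude (L2 norm) and a direction
        # (the unit-normalized row).
        mags = np.array([np.linalg.norm(row) for row in rows])

        # The "keep the original weight" trick (assumed form): if no model
        # moved this row appreciably, leave the base row untouched, so the
        # effective merge weights need not sum to 1.
        if mags.max() < keep_threshold:
            continue

        dirs = [row / m if m > 0 else row for row, m in zip(rows, mags)]

        # Weigh each model's row by how much it changed (higher = better).
        # A numerically stabilized softmax over magnitudes is an assumed
        # stand-in for the paper's calibrated importance score.
        weights = np.exp((mags - mags.max()) / temperature)
        weights /= weights.sum()

        # Recombine: weighted-average direction scaled by weighted-average
        # magnitude, added back onto the base row.
        merged_dir = sum(w * d for w, d in zip(weights, dirs))
        merged_mag = float(weights @ mags)
        merged[r] = base[r] + merged_mag * merged_dir

    return merged

Example use, merging three perturbed copies of a random base matrix:

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 8))
models = [base + 0.1 * rng.normal(size=base.shape) for _ in range(3)]
merged = widen_style_merge(base, models)  # same shape as base: (4, 8)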

Showing items 1 to 1 of 1 with 100 items per page.