
Internal document: Google trained PaLM 2 on 3.6T tokens, up from 780B tokens for the original PaLM in 2022, while shrinking the model from 540B to 340B parameters (Jennifer Elias/CNBC)
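
To make the shift concrete, here is a minimal Python sketch that computes the training-tokens-per-parameter ratio implied by the reported figures; the constants simply restate the headline's numbers, and the variable names are illustrative.

```python
# Tokens-per-parameter ratios implied by the reported figures.
# All constants restate the headline's numbers; nothing else is assumed.

PALM2_TOKENS = 3.6e12   # 3.6T training tokens (reported for PaLM 2)
PALM2_PARAMS = 340e9    # 340B parameters (reported for PaLM 2)
PALM_TOKENS = 780e9     # 780B training tokens (reported for 2022 PaLM)
PALM_PARAMS = 540e9     # 540B parameters (reported for 2022 PaLM)

for name, tokens, params in [
    ("PaLM 2", PALM2_TOKENS, PALM2_PARAMS),
    ("PaLM (2022)", PALM_TOKENS, PALM_PARAMS),
]:
    print(f"{name}: {tokens / params:.1f} training tokens per parameter")

# Output:
# PaLM 2: 10.6 training tokens per parameter
# PaLM (2022): 1.4 training tokens per parameter
```

Taken at face value, the reported figures mean PaLM 2 was trained on roughly seven times as many tokens per parameter as its predecessor, trading a smaller model for far more data.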