I'm very excited about the release of this model and the efforts the team went through to openly document seemingly every aspect of it. Thank you!

I wonder if any information can be given concerning the selection of the training datasets. The announcement at https://machinelearning.apple.com/research/openelm mentions "publicly available datasets". More specifically, https://github.com/apple/corenet/blob/main/projects/openelm/README-pretraining.md says:

> OpenELM was pretrained on public datasets. Specifically, our pre-training dataset contains RefinedWeb, PILE, a subset of RedPajama, and a subset of Dolma v1.6.
Digging into RefinedWeb at https://huggingface.co/datasets/tiiuae/falcon-refinedweb/viewer/default/train?q=nytimes.com shows that it contains content from sources such as nytimes.com and cnn.com.
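Beyond the web viewer, the same check can be scripted. Below is a minimal sketch that scans records for news-site URLs; the `find_sources` helper is hypothetical (not part of any Apple or RefinedWeb tooling), and the commented-out loading step assumes the Hugging Face `datasets` library and that records expose a `url` field, as the falcon-refinedweb dataset card describes.

```python
def find_sources(records, domains=("nytimes.com", "cnn.com"), limit=5):
    """Return up to `limit` record URLs whose domain matches `domains`.

    `records` is any iterable of dicts with a "url" key, e.g. a
    streamed Hugging Face dataset.
    """
    hits = []
    for rec in records:
        url = rec.get("url", "")
        if any(d in url for d in domains):
            hits.append(url)
            if len(hits) >= limit:
                break
    return hits

# Assumed usage (requires `pip install datasets` and network access):
# from datasets import load_dataset
# stream = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
# print(find_sources(stream))
```

Streaming avoids downloading the full multi-terabyte dataset just to spot-check a few records.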
This is not surprising: because of the vast amounts of data needed to train LLMs (basically a snapshot of the internet), all training datasets will contain copyrighted material. LLMs are, in a sense, a snapshot of humanity's knowledge. References to copyrighted characters like Superman, Captain Kirk, Donald Duck, and Bugs Bunny are part of that collective knowledge, and references to them can pop up just about anywhere in a dataset. Getting a snapshot of humanity's knowledge that is free of such references would be as impossible as removing the sugar from a cake after it has been baked.
So while the project only mentions "publicly available datasets" and never claims to be "free of copyrighted material", can any information be shared about the selection process behind the datasets used to train OpenELM?