AI Lab PleIAs Releases Fully Open Dataset, as AMD, Ai2 Release Open AI Models (huggingface.co)
(Saturday November 16, 2024 @11:34AM (EditorDavid)
from the model-citizens dept.)
- Reference: 0175480313
- News link: https://news.slashdot.org/story/24/11/16/0326222/ai-lab-pleias-releases-fully-open-dataset-as-amd-ai2-release-open-ai-models
- Source link: https://huggingface.co/blog/Pclanglais/two-trillion-tokens-open
French private AI lab PleIAs "is committed to training LLMs in the open," they write in [1]a blog post at Mozilla.org. "This means not only releasing our models but also being open about every aspect, from the training data to the training code. We define 'open' strictly: all data must be both accessible and under permissive licenses."
Wednesday PleIAs announced they were releasing the largest open multilingual pretraining dataset, according to [2]their blog post at HuggingFace:
> Many have [3]claimed that training large language models requires copyrighted data, making truly open AI development impossible. Today, [4]Pleias is proving otherwise with the release of [5]Common Corpus (part of the AI Alliance Open Trusted Data Initiative) — the largest fully open multilingual dataset for training LLMs, containing over 2 trillion tokens of permissibly licensed content with provenance information (2,003,039,184,047 tokens).
>
> As developers are responding to pressures from new regulations like the EU AI Act, Common Corpus goes beyond compliance by making our entire permissibly licensed dataset freely available [6]on HuggingFace, with detailed documentation of every data source. We have taken extensive steps to ensure that the dataset is high-quality and is curated to train powerful models. Through this release, we are demonstrating that there doesn't have to be such a [heavy] trade-off between openness and performance.
>
> Common Corpus is:
>
> — Truly Open: contains only data that is permissively licensed and provenance is documented
>
> — Multilingual: mostly representing English and French data, but contains at least 1B tokens for over 30 languages
>
> — Diverse: consisting of scientific articles, government and legal documents, code, and cultural heritage data, including books and newspapers
>
> — Extensively Curated: spelling and formatting have been corrected in digitized texts, harmful and toxic content has been removed, and content with low educational value has also been removed.
>
>
> Common Corpus builds on a growing ecosystem of large, open datasets, such as [7]Dolma, [8]FineWeb, and [9]RefinedWeb. The Common Pile, currently in preparation under the coordination of EleutherAI, is built around the same principle of using permissibly licensed English-language content, and, unsurprisingly, there have been many opportunities for collaboration and shared effort. But even together, these datasets do not provide enough training data for models much larger than a few billion parameters. So in order to expand the options for open model training, we still need more open data...
>
> Based on [10]an analysis of 1 million user interactions with ChatGPT, the plurality of user requests are for creative compositions... The kind of content we actually need — like creative writing — is usually tied up in copyright restrictions. Common Corpus tackles these challenges through five carefully curated collections...
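The corpus quoted above is hosted as the Hugging Face dataset PleIAs/common_corpus (link [6]). As a minimal sketch of how one might peek at it without downloading all two trillion tokens, the snippet below streams a few records with the `datasets` library; the presence of a default "train" split and the exact field names are assumptions to verify against the dataset card.

```python
# Minimal sketch: stream a few records from Common Corpus instead of
# downloading the full ~2T-token dataset. Assumes `pip install datasets`;
# the "train" split name is an assumption -- check the dataset card.
from datasets import load_dataset

corpus = load_dataset("PleIAs/common_corpus", split="train", streaming=True)

for i, record in enumerate(corpus):
    # Field names (text plus provenance metadata) may vary by release,
    # so print the keys before relying on any particular column.
    print(record.keys())
    if i >= 2:
        break
```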
Last week AMD also released its first series of [11]fully open 1 billion parameter language models, AMD OLMo.
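For readers who want to try the AMD models, here is a rough sketch of loading one with the transformers library. The repository id "amd/AMD-OLMo-1B" is an assumption based on AMD's announcement (link [11]); check the Hugging Face hub for the exact checkpoint names, as AMD also lists supervised fine-tuned and DPO variants.

```python
# Rough sketch: load an AMD OLMo 1B checkpoint and generate a short sample.
# The repo id below is assumed from AMD's announcement; verify it on the hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "amd/AMD-OLMo-1B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Fully open training data matters because", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```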
And last month VentureBeat reported that the non-profit Allen Institute for AI [12]had unveiled Molmo, "an open-source family of state-of-the-art multimodal AI models which outperform top proprietary rivals including OpenAI's GPT-4o, Anthropic's Claude 3.5 Sonnet, and Google's Gemini 1.5 on several third-party benchmarks."
[1] https://future.mozilla.org/builders/news_insights/announcing-common-corpus-a-2-trillion-token-dataset-thats-fully-open-and-accessible/
[2] https://huggingface.co/blog/Pclanglais/two-trillion-tokens-open
[3] https://arstechnica.com/information-technology/2024/01/openai-says-its-impossible-to-create-useful-ai-models-without-copyrighted-material/
[4] https://pleias.fr/
[5] https://pleias.fr/
[6] https://huggingface.co/datasets/PleIAs/common_corpus
[7] https://arxiv.org/pdf/2402.00159
[8] https://arxiv.org/pdf/2406.17557
[9] https://arxiv.org/abs/2306.01116
[10] https://arxiv.org/pdf/2407.14933v2
[11] https://www.amd.com/en/developer/resources/technical-articles/introducing-the-first-amd-1b-language-model.html
[12] https://venturebeat.com/ai/ai2s-new-molmo-open-source-ai-models-beat-gpt-4o-claude-on-some-benchmarks/