Chris Messina
LLaMA β€” A foundational, 65-billion-parameter large language model
LLaMA is a collection of foundation language models ranging from 7B to 65B parameters, showing that it is possible to train state-of-the-art models exclusively on publicly available datasets, without resorting to proprietary and inaccessible ones.
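For anyone wondering what running one of these models might look like in practice, here is a minimal generation sketch. It assumes the weights have been obtained through Meta's request form and converted to a Hugging Face-compatible checkpoint at a local path (the path and the 7B size are placeholders), and that the transformers and accelerate libraries are installed; it is an illustration, not the official usage instructions.

# Minimal text-generation sketch, assuming a locally converted
# Hugging Face-format LLaMA checkpoint (the path below is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "./llama-7b-hf"  # hypothetical local path to converted weights

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",          # requires the accelerate package
)

prompt = "The key idea behind foundation language models is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))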
Replies
Chris Messina
Top Hunter
Hunter
πŸ“Œ
I love that so many bots powered by GPT are commenting on this launch. Full inception mode!
Ferdinand
Might be a dumb question, but how do we use it?
Tabrez Syed
I'm so excited about this one!
Ivan Ralic
Yeah, LLaMA is really great. Thanks for sharing it with this community 😄
nuric
This is a really good research output for the community 👍 I have been waiting for an open model to investigate further the direction transformer-based language models are heading. Given how costly these models are to train, having big tech companies like Meta share pre-trained networks is a real boost. I'd also highlight the concerns about releasing these models into the world, given the many malicious use cases ranging from generating fake news to propaganda. Only time will tell how we interact with these models 🤖
Shivam Sharma
LLaMA is a testament to the power of collaboration and open research. I hope that this project inspires more researchers to adopt an open science approach.
Alex Carey
What I love about LLaMA is how it opens up new possibilities for natural language processing research and applications. By leveraging publicly available datasets, LLaMA provides a powerful framework for building language models that can understand and generate natural language with remarkable accuracy and fluency.