Foundation Models in Smart Agriculture: Basics, Opportunities, and Challenges

This study explores the potential of foundation models (FMs) in the field of smart agriculture.

GitHub Link

The GitHub link is https://github.com/jiajiali04/agriculture-foundation-models

Introduction

The GitHub repository "Agriculture-Foundation-Models", maintained by Jiajia Li at MSU, contains a curated list of significant foundation models in agriculture. These models offer broad capabilities across a range of domains, and the repository is open to contributions and suggestions. It highlights the advantages of foundation models over traditional deep learning models, including pre-trained knowledge, fine-tuning flexibility, and data efficiency. The accompanying paper covers surveys, a taxonomy, and applications of these models, with examples from language, vision, multimodal, and reinforcement learning foundation models. Together, they aim to provide valuable insights into foundation models for smart agriculture.
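To make the "fine-tuning flexibility and data efficiency" point concrete, here is a minimal fine-tuning sketch. It assumes Hugging Face transformers and PyTorch are available, uses "google/vit-base-patch16-224-in21k" as an example pre-trained vision backbone, and treats the crop-disease dataset and its five labels as hypothetical placeholders; it is an illustrative sketch, not the repository's own code.

```python
# Illustrative sketch: adapting a pre-trained vision foundation model to a small
# agricultural classification task. The dataset and label count are hypothetical.
import torch
from torch.utils.data import DataLoader
from transformers import ViTForImageClassification

# Load a generic pre-trained backbone and attach a new task-specific head.
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=5,  # hypothetical number of crop-disease classes
)

# Freeze the backbone so only the small classification head is trained;
# this is why a modest amount of labelled farm imagery can be enough.
for param in model.vit.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(model.classifier.parameters(), lr=1e-3)

def fine_tune(dataloader: DataLoader, epochs: int = 3) -> None:
    """Standard supervised fine-tuning loop over (pixel_values, labels) batches."""
    model.train()
    for _ in range(epochs):
        for pixel_values, labels in dataloader:
            outputs = model(pixel_values=pixel_values, labels=labels)
            outputs.loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```

Freezing the backbone is only one option; with more labelled data, the whole model can be fine-tuned end to end for higher accuracy at higher compute cost.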

Content

A curated list of awesome Foundation Models in Agriculture papers, currently maintained by Jiajia Li @ MSU. The list is a work in progress, and suggestions and contributions are appreciated. If you have any suggestions or find any missed papers, feel free to reach out or submit a pull request. Contribution guidelines: if a preprint paper has multiple versions, use the earliest submitted year, and display the papers in descending order by year (the latest first). Please consider citing the accompanying paper (note that the current version of the survey is only a draft and is still being worked on). The repository also discusses why foundation models are preferable to traditional deep learning models, and the paper divides the textual instructions into four categories.

Alternatives & Similar Tools

LongLLaMA: handle very long text contexts, up to 256,000 tokens

LongLLaMA is a large language model designed to handle very long text contexts, up to 256,000 tokens. It's based on OpenLLaMA and uses a technique called Focused Transformer (FoT) for training. The repository provides a smaller 3B version of LongLLaMA for free use. It can also be used as a replacement for LLaMA models with shorter contexts.
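A minimal usage sketch of loading the freely available 3B checkpoint with Hugging Face transformers is shown below. The model id "syzymon/long_llama_3b" and the need for trust_remote_code (to pull in the Focused Transformer modelling code) are assumptions based on the project's release; adjust them to match the actual repository instructions.

```python
# Sketch only: assumed Hub id and loading options for the 3B LongLLaMA checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL_ID = "syzymon/long_llama_3b"  # assumed Hugging Face Hub id for the free 3B model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float32,
    trust_remote_code=True,  # loads the custom Focused Transformer (FoT) code
)

# Prompt it like any other causal LM; long inputs are handled by the FoT
# memory layers rather than a fixed positional window.
prompt = "Summarize the key ideas of foundation models in agriculture:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the interface mirrors standard LLaMA-style models, the same loading pattern can be used when swapping LongLLaMA in for a shorter-context LLaMA model.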