Paper: Regression with Large Language Models for Materials and Molecular Property Prediction
Authors: Ryan Jacobs, Maciej P. Polak, Lane E. Schultz, Hamed Mahdavi, Vasant Honavar, Dane Morgan
Abstract: We demonstrate the ability of large language models (LLMs) to perform material and molecular property regression tasks, a significant deviation from the conventional LLM use case. We benchmark the Large Language Model Meta AI (LLaMA) 3 on several molecular properties in the QM9 dataset and 24 materials properties. Only composition-based input strings are used as the model input and we fine-tune on only the generative loss. We broadly find that LLaMA 3, when fine-tuned using the SMILES representation of molecules, provides useful regression results which can rival standard materials property prediction models like random forest or fully connected neural networks on the QM9 dataset. Not surprisingly, LLaMA 3 errors are 5-10x higher than those of the state-of-the-art models that were trained using far more granular representations of molecules (e.g., atom types and their coordinates) for the same task. Interestingly, LLaMA 3 provides improved predictions compared to GPT-3.5 and GPT-4o. This work highlights the versatility of LLMs, suggesting that LLM-like generative models can potentially transcend their traditional applications to tackle complex physical phenomena, thus paving the way for future research and applications in chemistry, materials science and other scientific domains.
Link: https://arxiv.org/abs/2409.06080
Reasoning: We start by examining the title, which mentions "Large Language Models" (LLMs). This indicates that the paper involves language models. Next, we look at the abstract, which discusses the use of LLMs for material and molecular property regression tasks. The abstract further elaborates on the performance of LLaMA 3, a specific large language model, in these tasks. Given that the paper focuses on the application and performance of a large language model, it is indeed about a language model.
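The abstract only describes the setup at a high level: each molecule's SMILES string is paired with a numeric property target, and the model is fine-tuned on the plain generative (next-token) loss so that it emits the value as text. Below is a minimal sketch of what such a data pipeline could look like; the prompt template and the helpers make_example and parse_prediction are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of the generative-regression setup described in the
# abstract: format SMILES -> property pairs as prompt/completion text for
# fine-tuning, and parse a numeric prediction back out of generated text.
# The prompt wording is an assumption; the paper does not specify it here.

def make_example(smiles: str, value: float, property_name: str) -> dict:
    """Format one QM9-style record as a prompt/completion pair."""
    prompt = f"What is the {property_name} of the molecule {smiles}?"
    completion = f" {value:.4f}"  # model learns to emit the number as text
    return {"prompt": prompt, "completion": completion}

def parse_prediction(generated: str) -> float | None:
    """Recover a numeric prediction from generated text; None if unparseable."""
    try:
        return float(generated.strip().split()[0])
    except (ValueError, IndexError):
        return None

if __name__ == "__main__":
    ex = make_example("CC(=O)O", -0.2531, "HOMO energy (Ha)")
    print(ex["prompt"], "->", ex["completion"])
    print(parse_prediction("-0.2517 Ha"))
```

With pairs in this form, regression metrics such as MAE can be computed by generating a completion per molecule, parsing it with parse_prediction, and comparing against the held-out target values.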