Comparative Evaluation of Prompting and Fine-Tuning for Applying Large Language Models to Grid-Structured Geospatial Data
By: Akash Dhruv, Yangxinyu Xie, Jordan Branham, and others
Potential Business Impact:
Helps computers understand maps and time better.
This paper presents a comparative study of how large language models (LLMs) interpret grid-structured geospatial data. We evaluate a base model's performance under structured prompting and contrast it with a fine-tuned variant trained on a dataset of user-assistant interactions. Our results highlight the strengths and limitations of zero-shot prompting and demonstrate the benefits of fine-tuning for structured geospatial and temporal reasoning.
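To make the setup concrete, below is a minimal sketch of the kind of structured prompting the abstract describes: serializing a 2D grid of geospatial values into a labeled textual prompt that a base LLM could reason over zero-shot. The cell format, variable names, and question template are illustrative assumptions, not the authors' actual protocol.

```python
def grid_to_prompt(grid, variable="temperature", timestamp="2024-07-01T12:00Z"):
    """Flatten a 2D grid into labeled (row, col, value) lines for an LLM prompt.

    The (row, col) labeling and header fields are hypothetical choices for
    illustration; the paper's exact serialization may differ.
    """
    lines = [f"Variable: {variable}", f"Timestamp: {timestamp}", "Grid values:"]
    for r, row in enumerate(grid):
        for c, val in enumerate(row):
            lines.append(f"  cell({r},{c}) = {val}")
    lines.append("Question: which cell has the maximum value?")
    return "\n".join(lines)

# Example: a 2x2 grid of readings serialized into a prompt string.
prompt = grid_to_prompt([[1.0, 3.5], [2.2, 0.7]])
print(prompt)
```

A fine-tuned variant, by contrast, would be trained on many such prompt/answer pairs (user-assistant interactions), so the serialization itself matters less than the learned mapping.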
Similar Papers
Simplify-This: A Comparative Analysis of Prompt-Based and Fine-Tuned LLMs
Computation and Language
Makes complex writing easier to understand.
Dissecting Clinical Reasoning in Language Models: A Comparative Study of Prompts and Model Adaptation Strategies
Computation and Language
Helps doctors understand patient notes better.
Resource-Efficient Adaptation of Large Language Models for Text Embeddings via Prompt Engineering and Contrastive Fine-tuning
Computation and Language
Makes computers understand whole sentences better.