Chat with WARC-GPT using this interface!
WARC-GPT
Base Model
Instructions
Use WARC-GPT to ask questions about the University of Kentucky’s CELT site (https://celt.uky.edu). Adjusting the maximum tokens setting caps how long the chatbot’s responses can be.
Adjusting the temperature controls how random or creative the chatbot’s responses are. A lower setting produces more predictable, structured responses, while a higher setting allows the model to generate less predictable ones.
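For the curious, temperature works by rescaling the model’s output scores before they are turned into word probabilities. A minimal sketch of that mechanism (the function and values below are illustrative, not part of WARC-GPT):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide each score by the temperature, then normalize into
    probabilities. Lower temperature -> sharper, more predictable
    distribution; higher temperature -> flatter, more random one."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words
logits = [2.0, 1.0, 0.5]
low_temp = softmax_with_temperature(logits, 0.2)   # near-deterministic
high_temp = softmax_with_temperature(logits, 2.0)  # closer to uniform
```

At a low temperature the top candidate dominates the distribution, which is why responses feel more structured; at a high temperature the probabilities even out and sampling becomes less predictable.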
The left-hand panel shows an LLM that uses an adapter. In essence, we’ve given the model a textbook of additional information to draw on when answering your questions. The right-hand panel is the general base model the adapter was applied to, without that specialized information.