Compare LLMs using this interface!
Adapter
Llama 3.0
Instructions
Adjusting the maximum tokens sets an upper limit on how long the chatbot's responses can be.
Adjusting the temperature controls how random or creative its responses are. A lower setting produces more predictable, structured responses, while a higher temperature lets the model generate less predictable ones.
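Under the hood, temperature rescales the model's token scores before one is sampled. A minimal sketch of that idea (the logit values below are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide each logit by the temperature before the softmax:
    # a low temperature sharpens the distribution (more predictable picks),
    # a high temperature flattens it (more varied picks).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next tokens
logits = [2.0, 1.0, 0.5]
low = softmax_with_temperature(logits, 0.2)   # sharply peaked on the top token
high = softmax_with_temperature(logits, 2.0)  # much closer to uniform
```

With the low temperature almost all probability lands on the highest-scoring token; with the high temperature the alternatives become real contenders, which is why responses feel more varied.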
The left-hand panel shows an LLM that uses an adapter. Essentially, we've given it a textbook so it can draw on extra information to answer your questions. The right-hand panel is the general model the adapter was applied to, but without that specialized information.
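One common way adapters work (a LoRA-style low-rank update, used here as an illustrative assumption rather than a statement about this particular interface) is to add a small learned correction to a frozen base weight, so the general model stays untouched underneath. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight: a stand-in for one layer of the general model
W = rng.standard_normal((4, 4))

# Small low-rank adapter factors (rank 1 here), hypothetically
# trained on the specialized "textbook" material
A = rng.standard_normal((1, 4))
B = rng.standard_normal((4, 1))

x = rng.standard_normal(4)        # a stand-in input vector

base_out = W @ x                  # right-hand panel: base model alone
adapted_out = (W + B @ A) @ x     # left-hand panel: base model + adapter
```

The adapter factors are tiny compared to the base weight, which is why the same general model can be served with or without the specialization, as the two panels do.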