We have been working with BERT[1], a natural language processing (NLP) model that Google released a few years ago. BERT can be used for a number of NLP tasks, including multi-label classification.
Using gross pathology reports, we have been training our own BERT language models for later fine-tuning into pathology-focused NLP models.
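BERT-style pretraining corrupts text with a masked-language-model objective and asks the model to recover the original tokens. The sketch below illustrates that corruption step in plain Python; the token list and vocabulary are invented examples for illustration, not our actual pathology data or tokenizer.

```python
import random

MASK = "[MASK]"
# Hypothetical mini-vocabulary for random-token replacement.
VOCAB = ["liver", "lesion", "margin", "cm", "tan", "firm"]

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style MLM corruption: each token is selected with
    probability mask_prob; of selected tokens, 80% become [MASK],
    10% become a random vocabulary token, 10% stay unchanged.
    Returns (corrupted_tokens, labels); labels hold the original
    token at masked positions and None elsewhere."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            labels.append(tok)  # the model must predict this token
            roll = rng.random()
            if roll < 0.8:
                corrupted.append(MASK)
            elif roll < 0.9:
                corrupted.append(rng.choice(VOCAB))
            else:
                corrupted.append(tok)
        else:
            labels.append(None)  # no prediction target at this position
            corrupted.append(tok)
    return corrupted, labels
```

During pretraining the loss is computed only at positions where the label is not None, which is why average per-token loss (and its exponential, perplexity) is the natural metric to track across epochs.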
The graphs below show perplexity and loss over 10, 15, and 25 training epochs.
![](https://www.ai.uky.edu/wp-content/uploads/2020/08/image-3.png)
![](https://www.ai.uky.edu/wp-content/uploads/2020/08/image-4-1024x726.png)
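The two metrics plotted above are directly related: perplexity is the exponential of the average per-token cross-entropy loss, so the curves move together. A minimal sketch of that relationship (the loss value here is illustrative, not read from our plots):

```python
import math

def perplexity(avg_nll: float) -> float:
    """Perplexity is exp of the average per-token
    cross-entropy (negative log-likelihood)."""
    return math.exp(avg_nll)

# A hypothetical average loss of 1.5 nats per token
# corresponds to a perplexity of about 4.48.
print(perplexity(1.5))
```

A perplexity of 1.0 (loss of 0) would mean the model predicts every masked token with certainty, so lower values on both curves indicate a better-fit language model.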