We have been working with BERT [1], a natural language processing (NLP) model that Google released a few years ago. BERT can be used for a number of NLP tasks, including multi-label classification.
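To give a sense of what multi-label classification means here: unlike single-label classification, each label gets its own independent probability, so one report can carry several findings at once. The sketch below is purely illustrative (the label names and logit values are made up, not from our models); it just shows the sigmoid-per-label decision rule that a multi-label BERT head would apply.

```python
import math

def multi_label_probs(logits):
    """Each label gets an independent sigmoid probability (unlike
    softmax, the probabilities need not sum to 1), so several labels
    can be 'on' for the same document."""
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]

# Hypothetical labels and logits, for illustration only.
labels = ["malignant", "necrosis", "inflammation"]
probs = multi_label_probs([2.1, -0.7, 0.3])

# Any label whose probability clears 0.5 is predicted.
predicted = [lab for lab, p in zip(labels, probs) if p > 0.5]
```

With these example logits, "malignant" and "inflammation" clear the 0.5 threshold while "necrosis" does not, so both are predicted simultaneously.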

Using gross pathology reports, we have been training our own BERT-based language models, which we then use as the starting point for fine-tuning pathology-focused NLP models.
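The core of this kind of pretraining is BERT's masked-language-model objective: hide a fraction of the tokens in each report and train the model to recover them. As a rough sketch (the example sentence is invented, and real BERT masking also sometimes substitutes a random token or keeps the original instead of always using `[MASK]`), the masking step looks something like this:

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Randomly select positions for masked-language-model
    pretraining. Each selected token becomes a prediction target
    and is replaced with [MASK] in the input."""
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)  # None = not a prediction target
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok        # the model must recover this token
            masked[i] = MASK_TOKEN
    return masked, labels

# Invented example text in the style of a gross pathology report.
tokens = "specimen consists of a 3 cm tan-pink soft tissue fragment".split()
masked, labels = mask_tokens(tokens, mask_prob=0.3, seed=0)
```

The model sees `masked` as input and is penalized only at the positions where `labels` holds the original token; that per-token cross-entropy is the training loss.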

The graphs show perplexity and loss scores for 10, 15, and 25 epochs.
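The two curves are directly related: perplexity is just the exponential of the mean per-token cross-entropy loss, which is why they fall together as training progresses. The loss values below are illustrative, not the actual scores from our runs.

```python
import math

def perplexity(mean_cross_entropy_loss):
    """Perplexity is exp(mean per-token cross-entropy loss), so a
    lower loss always means a lower (better) perplexity."""
    return math.exp(mean_cross_entropy_loss)

# Illustrative values: a loss of 2.0 corresponds to a perplexity
# of e^2, i.e. the model is roughly as uncertain as a uniform
# choice among ~7.4 tokens at each masked position.
ppl = perplexity(2.0)
```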



