# Justification for model type used

The BART model offers significant advantages for named entity recognition in the legal domain: strong contextual understanding and generative capability, the ability to capture long-distance dependencies, good generalisation and robustness gained from its denoising autoencoder pre-training, and a flexible input-output format that suits complex tasks.

In our paper, the BART-Large model is used for the experiments, with 12 layers in both the encoder and the decoder. For a fair comparison, the pre-trained models used in the comparative experiments have the same number of encoder and decoder layers as BART-Large. During training, the batch size is set to 16 and the learning rate to 1e-5. Output sequences in both the training and inference phases are generated with the greedy search algorithm; a minimal sketch of this setup appears at the end of this section.

# Evaluation method & Assessment metrics (justification)

An entity prediction is considered correct only when both the entity span and the entity type are predicted correctly. Precision, recall, and F1 score are used as the evaluation metrics; a sketch of this strict-match computation is given below.
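As a concrete illustration of the training and decoding setup described above, here is a minimal sketch assuming the Hugging Face Transformers implementation of BART; the framework, checkpoint name, and example sentence are our assumptions, not stated in the paper.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

# Assumed checkpoint: BART-Large has 12 encoder and 12 decoder layers,
# matching the configuration described above.
model_name = "facebook/bart-large"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Learning rate from the paper; batch size 16 would be set on the
# DataLoader during fine-tuning (omitted here for brevity).
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Hypothetical legal-domain input sentence.
text = "The appeal was heard before the Supreme Court of India."
inputs = tokenizer(text, return_tensors="pt")

# Greedy search: num_beams=1 with do_sample=False makes generate() pick
# the highest-probability token at every decoding step.
output_ids = model.generate(**inputs, num_beams=1, do_sample=False,
                            max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```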
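To make the strict-match criterion concrete, the following is a minimal sketch of computing micro precision, recall, and F1 over entities represented as (start, end, type) tuples; the function name, tuple layout, and example labels are illustrative, not taken from the paper.

```python
# A prediction counts as a true positive only when both the entity span
# (start, end) and the entity type match a gold entity exactly.
def prf1(gold_entities, pred_entities):
    """Micro precision, recall, and F1 over strict (span, type) matches."""
    gold = set(gold_entities)
    pred = set(pred_entities)
    tp = len(gold & pred)  # span AND type both correct
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Example: one exact match, one span match with the wrong type
# (which therefore counts as an error under the strict criterion).
gold = [(0, 2, "COURT"), (5, 7, "STATUTE")]
pred = [(0, 2, "COURT"), (5, 7, "JUDGE")]
print(prf1(gold, pred))  # (0.5, 0.5, 0.5)
```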